939 results for Risk based maintenance


Relevance: 30.00%

Abstract:

Because of the increase in workplace automation and the diversification of industrial processes, workplaces have become more and more complex. The classical approaches used to address workplace hazard concerns, such as checklists or sequence models, are, therefore, of limited use in such complex systems. Moreover, because of the multifaceted nature of workplaces, the use of single-oriented methods, such as AEA (man oriented), FMEA (system oriented), or HAZOP (process oriented), is not satisfactory. The use of a dynamic modeling approach in order to allow multiple-oriented analyses may constitute an alternative to overcome this limitation. The qualitative modeling aspects of the MORM (man-machine occupational risk modeling) model are discussed in this article. The model, realized on an object-oriented Petri net tool (CO-OPN), has been developed to simulate and analyze industrial processes in an OH&S perspective. The industrial process is modeled as a set of interconnected subnets (state spaces), which describe its constitutive machines. Process-related factors are introduced, in an explicit way, through machine interconnections and flow properties. While man-machine interactions are modeled as triggering events for the state spaces of the machines, the CREAM cognitive behavior model is used in order to establish the relevant triggering events. In the CO-OPN formalism, the model is expressed as a set of interconnected CO-OPN objects defined over data types expressing the measure attached to the flow of entities transiting through the machines. Constraints on the measures assigned to these entities are used to determine the state changes in each machine. Interconnecting machines implies the composition of such flow and consequently the interconnection of the measure constraints. This is reflected by the construction of constraint enrichment hierarchies, which can be used for simulation and analysis optimization in a clear mathematical framework. The use of Petri nets to perform multiple-oriented analysis opens perspectives in the field of industrial risk management. It may significantly reduce the duration of the assessment process. But, most of all, it opens perspectives in the field of risk comparisons and integrated risk management. Moreover, because of the generic nature of the model and tool used, the same concepts and patterns may be used to model a wide range of systems and application fields.
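As a rough illustration of the modeling idea summarized above, and not of the CO-OPN formalism or the MORM model itself, the following Python sketch treats each machine as a small state space whose transitions are gated by a constraint on the measure carried by the entities flowing through it, with a man-machine interaction acting as the triggering event; the class names, constraints and values are invented for the example.

    # Toy illustration: machines as state spaces whose transitions are gated by
    # constraints on the "measure" carried by entities flowing through them.
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Entity:
        measure: float  # e.g. load or energy of the item transiting through the machines

    @dataclass
    class Machine:
        name: str
        state: str = "idle"
        constraint: Callable[[float], bool] = lambda m: True  # gate on the entity's measure
        downstream: List["Machine"] = field(default_factory=list)

        def trigger(self, entity: Entity, event: str) -> None:
            # A man-machine interaction (the triggering event) changes the machine's
            # state only if the entity's measure satisfies this machine's constraint.
            if self.constraint(entity.measure):
                self.state = "running" if event == "start" else "stopped"
                # Interconnection: the entity is passed on, so the downstream
                # machine's constraint is effectively composed with this one.
                for m in self.downstream:
                    m.trigger(entity, event)

    # Usage: a press feeding a conveyor that only accepts loads below 50 units.
    conveyor = Machine("conveyor", constraint=lambda m: m < 50.0)
    press = Machine("press", constraint=lambda m: m < 100.0, downstream=[conveyor])
    press.trigger(Entity(measure=30.0), "start")
    print(press.state, conveyor.state)  # -> running running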

Relevance: 30.00%

Abstract:

Ces dernières années, de nombreuses recherches ont mis en évidence les effets toxiques des micropolluants organiques pour les espèces de nos lacs et rivières. Cependant, la plupart de ces études se sont focalisées sur la toxicité des substances individuelles, alors que les organismes sont exposés tous les jours à des milliers de substances en mélange. Or les effets de ces cocktails ne sont pas négligeables. Cette thèse de doctorat s'est ainsi intéressée aux modèles permettant de prédire le risque environnemental de ces cocktails pour le milieu aquatique. Le principal objectif a été d'évaluer le risque écologique des mélanges de substances chimiques mesurées dans le Léman, mais aussi d'apporter un regard critique sur les méthodologies utilisées afin de proposer certaines adaptations pour une meilleure estimation du risque. Dans la première partie de ce travail, le risque des mélanges de pesticides et médicaments pour le Rhône et pour le Léman a été établi en utilisant des approches envisagées notamment dans la législation européenne. Il s'agit d'approches de « screening », c'est-à-dire permettant une évaluation générale du risque des mélanges. Une telle approche permet de mettre en évidence les substances les plus problématiques, c'est-à-dire contribuant le plus à la toxicité du mélange. Dans notre cas, il s'agit essentiellement de 4 pesticides. L'étude met également en évidence que toutes les substances, même en trace infime, contribuent à l'effet du mélange. Cette constatation a des implications en terme de gestion de l'environnement. En effet, ceci implique qu'il faut réduire toutes les sources de polluants, et pas seulement les plus problématiques. Mais l'approche proposée présente également un biais important au niveau conceptuel, ce qui rend son utilisation discutable, en dehors d'un screening, et nécessiterait une adaptation au niveau des facteurs de sécurité employés. Dans une deuxième partie, l'étude s'est portée sur l'utilisation des modèles de mélanges dans le calcul de risque environnemental. En effet, les modèles de mélanges ont été développés et validés espèce par espèce, et non pour une évaluation sur l'écosystème en entier. Leur utilisation devrait donc passer par un calcul par espèce, ce qui est rarement fait dû au manque de données écotoxicologiques à disposition. Le but a été donc de comparer, avec des valeurs générées aléatoirement, le calcul de risque effectué selon une méthode rigoureuse, espèce par espèce, avec celui effectué classiquement où les modèles sont appliqués sur l'ensemble de la communauté sans tenir compte des variations inter-espèces. Les résultats sont dans la majorité des cas similaires, ce qui valide l'approche utilisée traditionnellement. En revanche, ce travail a permis de déterminer certains cas où l'application classique peut conduire à une sous- ou sur-estimation du risque. Enfin, une dernière partie de cette thèse s'est intéressée à l'influence que les cocktails de micropolluants ont pu avoir sur les communautés in situ. Pour ce faire, une approche en deux temps a été adoptée. Tout d'abord la toxicité de quatorze herbicides détectés dans le Léman a été déterminée. Sur la période étudiée, de 2004 à 2009, cette toxicité due aux herbicides a diminué, passant de 4% d'espèces affectées à moins de 1%. Ensuite, la question était de savoir si cette diminution de toxicité avait un impact sur le développement de certaines espèces au sein de la communauté des algues. 
Pour ce faire, l'analyse statistique a permis d'isoler d'autres facteurs pouvant avoir une influence sur la flore, comme la température de l'eau ou la présence de phosphates, et ainsi de constater quelles espèces se sont révélées avoir été influencées, positivement ou négativement, par la diminution de la toxicité dans le lac au fil du temps. Fait intéressant, une partie d'entre elles avait déjà montré des comportements similaires dans des études en mésocosmes. En conclusion, ce travail montre qu'il existe des modèles robustes pour prédire le risque des mélanges de micropolluants sur les espèces aquatiques, et qu'ils peuvent être utilisés pour expliquer le rôle des substances dans le fonctionnement des écosystèmes. Toutefois, ces modèles ont bien sûr des limites et des hypothèses sous-jacentes qu'il est important de considérer lors de leur application. - Depuis plusieurs années, les risques que posent les micropolluants organiques pour le milieu aquatique préoccupent grandement les scientifiques ainsi que notre société. En effet, de nombreuses recherches ont mis en évidence les effets toxiques que peuvent avoir ces substances chimiques sur les espèces de nos lacs et rivières, quand elles se retrouvent exposées à des concentrations aiguës ou chroniques. Cependant, la plupart de ces études se sont focalisées sur la toxicité des substances individuelles, c'est-à-dire considérées séparément. Actuellement, il en est de même dans les procédures de régulation européennes, concernant la partie évaluation du risque pour l'environnement d'une substance. Or, les organismes sont exposés tous les jours à des milliers de substances en mélange, et les effets de ces "cocktails" ne sont pas négligeables. L'évaluation du risque écologique que posent ces mélanges de substances doit donc être abordée de la manière la plus appropriée et la plus fiable possible. Dans la première partie de cette thèse, nous nous sommes intéressés aux méthodes actuellement envisagées pour être intégrées dans les législations européennes pour l'évaluation du risque des mélanges pour le milieu aquatique. Ces méthodes sont basées sur le modèle d'addition des concentrations, avec l'utilisation des valeurs de concentrations des substances estimées sans effet dans le milieu (PNEC), ou à partir des valeurs des concentrations d'effet (CE50) sur certaines espèces d'un niveau trophique avec la prise en compte de facteurs de sécurité. Nous avons appliqué ces méthodes à deux cas spécifiques, le lac Léman et le Rhône situés en Suisse, et discuté les résultats de ces applications. Ces premières étapes d'évaluation ont montré que le risque des mélanges pour ces cas d'étude atteint rapidement une valeur au-dessus d'un seuil critique. Cette valeur atteinte est généralement due à deux ou trois substances principales. Les procédures proposées permettent donc d'identifier les substances les plus problématiques pour lesquelles des mesures de gestion, telles que la réduction de leur entrée dans le milieu aquatique, devraient être envisagées. Cependant, nous avons également constaté que le niveau de risque associé à ces mélanges de substances n'est pas négligeable, même sans tenir compte de ces substances principales. En effet, l'accumulation des substances, même en traces infimes, atteint un seuil critique, ce qui devient plus difficile en termes de gestion du risque. En outre, nous avons souligné un manque de fiabilité dans ces procédures, qui peuvent conduire à des résultats contradictoires en termes de risque. 
Ceci est lié à l'incompatibilité des facteurs de sécurité utilisés dans les différentes méthodes. Dans la deuxième partie de la thèse, nous avons étudié la fiabilité de méthodes plus avancées dans la prédiction de l'effet des mélanges pour les communautés évoluant dans le système aquatique. Ces méthodes reposent sur le modèle d'addition des concentrations (CA) ou d'addition des réponses (RA) appliqués sur les courbes de distribution de la sensibilité des espèces (SSD) aux substances. En effet, les modèles de mélanges ont été développés et validés pour être appliqués espèce par espèce, et non pas sur plusieurs espèces agrégées simultanément dans les courbes SSD. Nous avons ainsi proposé une procédure plus rigoureuse, pour l'évaluation du risque d'un mélange, qui serait d'appliquer d'abord les modèles CA ou RA à chaque espèce séparément, et, dans une deuxième étape, combiner les résultats afin d'établir une courbe SSD du mélange. Malheureusement, cette méthode n'est pas applicable dans la plupart des cas, car elle nécessite trop de données généralement indisponibles. Par conséquent, nous avons comparé, avec des valeurs générées aléatoirement, le calcul de risque effectué selon cette méthode plus rigoureuse, avec celui effectué traditionnellement, afin de caractériser la robustesse de cette approche qui consiste à appliquer les modèles de mélange sur les courbes SSD. Nos résultats ont montré que l'utilisation de CA directement sur les SSDs peut conduire à une sous-estimation de la concentration du mélange affectant 5 % ou 50 % des espèces, en particulier lorsque les substances présentent un grand écart-type dans leur distribution de la sensibilité des espèces. L'application du modèle RA peut quant à lui conduire à une sur- ou sous-estimation, principalement en fonction de la pente des courbes dose-réponse de chaque espèce composant les SSDs. La sous-estimation avec RA devient potentiellement importante lorsque le rapport entre l'EC50 et l'EC10 de la courbe dose-réponse des espèces est plus petit que 100. Toutefois, la plupart des substances, selon des cas réels, présentent des données d'écotoxicité qui font que le risque du mélange calculé par la méthode des modèles appliqués directement sur les SSDs reste cohérent et surestimerait plutôt légèrement le risque. Ces résultats valident ainsi l'approche utilisée traditionnellement. Néanmoins, il faut garder à l'esprit cette source d'erreur lorsqu'on procède à une évaluation du risque d'un mélange avec cette méthode traditionnelle, en particulier quand les SSD présentent une distribution des données en dehors des limites déterminées dans cette étude. Enfin, dans la dernière partie de cette thèse, nous avons confronté des prédictions de l'effet de mélange avec des changements biologiques observés dans l'environnement. Dans cette étude, nous avons utilisé des données venant d'un suivi à long terme d'un grand lac européen, le lac Léman, ce qui offrait la possibilité d'évaluer dans quelle mesure la prédiction de la toxicité des mélanges d'herbicide expliquait les changements dans la composition de la communauté phytoplanctonique, à côté d'autres paramètres classiques de limnologie tels que les nutriments. Pour atteindre cet objectif, nous avons déterminé, sur plusieurs années, la toxicité des mélanges de 14 herbicides régulièrement détectés dans le lac, en utilisant les modèles CA et RA avec les courbes de distribution de la sensibilité des espèces. Un gradient temporel de toxicité décroissant a pu être constaté de 2004 à 2009. 
Une analyse de redondance et de redondance partielle a montré que ce gradient explique une partie significative de la variation de la composition de la communauté phytoplanctonique, même après avoir enlevé l'effet de toutes les autres co-variables. De plus, certaines espèces qui se sont révélées avoir été influencées, positivement ou négativement, par la diminution de la toxicité dans le lac au fil du temps, ont montré des comportements similaires dans des études en mésocosmes. On peut en conclure que la toxicité du mélange herbicide est l'un des paramètres clés pour expliquer les changements de phytoplancton dans le lac Léman. En conclusion, il existe diverses méthodes pour prédire le risque des mélanges de micropolluants sur les espèces aquatiques et celui-ci peut jouer un rôle dans le fonctionnement des écosystèmes. Toutefois, ces modèles ont bien sûr des limites et des hypothèses sous-jacentes qu'il est important de considérer lors de leur application, avant d'utiliser leurs résultats pour la gestion des risques environnementaux. - For several years now, scientists as well as society at large have been concerned by the risk that organic micropollutants may pose to aquatic environments. Indeed, several studies have shown the toxic effects these substances may induce on organisms living in our lakes or rivers, especially when they are exposed to acute or chronic concentrations. However, most of the studies focused on the toxicity of single compounds, i.e. considered individually. The same also goes for the current European regulations concerning the environmental risk assessment procedures for these substances. But aquatic organisms are typically exposed every day simultaneously to thousands of organic compounds. The toxic effects resulting from these "cocktails" cannot be neglected. The ecological risk assessment of mixtures of such compounds has therefore to be addressed by scientists in the most reliable and appropriate way. In the first part of this thesis, the procedures currently envisioned for aquatic mixture risk assessment in European legislation are described. These methodologies are based on the mixture model of concentration addition and the use of the predicted no-effect concentrations (PNEC) or effect concentrations (EC50) with assessment factors. These principal approaches were applied to two specific case studies, Lake Geneva and the River Rhône in Switzerland, including a discussion of the outcomes of such applications. These first-level assessments showed that the mixture risks for these studied cases rapidly exceeded the critical value. This exceedance is generally due to two or three main substances. The proposed procedures therefore allow the identification of the most problematic substances for which management measures, such as a reduction of their entry into the aquatic environment, should be envisioned. However, it was also shown that the risk levels associated with mixtures of compounds are not negligible, even without considering these main substances. Indeed, it is the sum of the substances that is problematic, which is more challenging in terms of risk management. Moreover, a lack of reliability in the procedures was highlighted, which can lead to contradictory results in terms of risk. This result is linked to the inconsistency in the assessment factors applied in the different methods. In the second part of the thesis, the reliability of more advanced procedures to predict mixture effects on communities in the aquatic system was investigated. 
These established methodologies combine the model of concentration addition (CA) or response addition (RA) with species sensitivity distribution curves (SSD). Indeed, the mixture effect predictions were shown to be consistent only when the mixture models are applied to a single species, and not to several species simultaneously aggregated into SSDs. Hence, a more stringent procedure for mixture risk assessment is proposed: first apply the CA or RA models to each species separately and, in a second step, combine the results to build an SSD for the mixture. Unfortunately, this methodology is not applicable in most cases, because it requires large data sets usually not available. Therefore, the differences between the two methodologies were studied with datasets created artificially to characterize the robustness of the traditional approach of applying the models to species sensitivity distributions. The results showed that the use of CA directly on SSDs might lead to underestimations of the mixture concentration affecting 5% or 50% of species, especially when substances present a large standard deviation in their species sensitivity distribution. The application of RA can lead to over- or underestimates, depending mainly on the slope of the dose-response curves of the individual species. The potential underestimation with RA becomes important when the ratio between the EC50 and the EC10 for the dose-response curve of the species composing the SSD is smaller than 100. However, considering common real cases of ecotoxicity data for substances, the mixture risk calculated by the methodology applying mixture models directly on SSDs remains consistent and would rather slightly overestimate the risk. These results can be used as a theoretical validation of the currently applied methodology. Nevertheless, when assessing the risk of mixtures, one has to keep in mind this source of error with this classical methodology, especially when SSDs present a distribution of the data outside the range determined in this study. Finally, in the last part of this thesis, we confronted the mixture effect predictions with biological changes observed in the environment. In this study, long-term monitoring of a European great lake, Lake Geneva, provided the opportunity to assess to what extent the predicted toxicity of herbicide mixtures explains the changes in the composition of the phytoplankton community, next to other classical limnology parameters such as nutrients. To reach this goal, the gradient of the mixture toxicity of 14 herbicides regularly detected in the lake was calculated, using concentration addition and response addition models. A decreasing temporal gradient of toxicity was observed from 2004 to 2009. Redundancy analysis and partial redundancy analysis showed that this gradient explains a significant portion of the variation in phytoplankton community composition, even after removing the effect of all other co-variables. Moreover, some species that were revealed to be influenced, positively or negatively, by the decrease of toxicity in the lake over time showed similar behaviors in mesocosm studies. It could be concluded that the herbicide mixture toxicity is one of the key parameters to explain phytoplankton changes in Lake Geneva. To conclude, different methods exist to predict the risk of mixtures in ecosystems, but their reliability varies depending on the underlying hypotheses. 
One should therefore carefully consider these hypotheses, as well as the limits of the approaches, before using the results for environmental risk management.
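The two mixture models discussed above have standard closed forms, which the short Python sketch below illustrates with invented concentrations and dose-response parameters: concentration addition (CA) is summarized by the sum of toxic units, while response addition (RA) multiplies the probabilities of remaining unaffected. This is a generic, minimal illustration and not the calculation pipeline used in the thesis.

    # Minimal sketch of the two classical mixture models (illustrative values only).
    import math

    def effect(conc, ec50, slope):
        """Fraction of organisms affected, from a log-logistic dose-response curve."""
        return 0.0 if conc <= 0 else 1.0 / (1.0 + (ec50 / conc) ** slope)

    # Hypothetical mixture: measured concentration and dose-response parameters per substance.
    mixture = [
        {"name": "herbicide_A", "conc": 0.02, "ec50": 0.5, "slope": 2.0},
        {"name": "herbicide_B", "conc": 0.10, "ec50": 1.5, "slope": 1.5},
        {"name": "herbicide_C", "conc": 0.01, "ec50": 0.2, "slope": 3.0},
    ]

    # Concentration addition: substances behave as dilutions of one another, so the
    # mixture is summarized by the sum of toxic units (concentration / EC50).
    toxic_units = sum(s["conc"] / s["ec50"] for s in mixture)

    # Response addition: substances act independently, so the probability of being
    # unaffected by the mixture is the product of the single-substance probabilities.
    effect_ra = 1.0 - math.prod(1.0 - effect(s["conc"], s["ec50"], s["slope"]) for s in mixture)

    print(f"CA: sum of toxic units = {toxic_units:.3f} (a sum of 1 corresponds to the EC50 level)")
    print(f"RA: predicted mixture effect = {effect_ra:.1%}")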

Relevance: 30.00%

Abstract:

The organization of the thesis is as follows. The remainder of the thesis comprises five chapters and a conclusion. The next chapter formalizes the envisioned theory into a tractable model. Section 2.2 presents a formal description of the model economy: the individual heterogeneity, the individual objective, the UI setting, the population dynamics and the equilibrium. The welfare and efficiency criteria for qualifying various equilibrium outcomes are proposed in section 2.3. The fourth section shows how the model-generated information can be computed. Chapter 3 transposes the model from chapter 2 into conditions that enable its use in the analysis of individual labor market strategies and their implications for the labor market equilibrium. In section 3.2 the Swiss labor market data sets, stylized facts, and the UI system are presented. The third section outlines and motivates the parameterization method. In section 3.4 the model's replication ability is evaluated and some aspects of the parameter choice are discussed. Numerical solution issues can be found in the appendix. Chapter 4 examines the determinants of search-strategic behavior in the model economy and its implications for the labor market aggregates. In section 4.2, the unemployment duration distribution is examined and related to search strategies. Section 4.3 shows how the search-strategic behavior is influenced by the UI eligibility and section 4.4 how it is determined by individual heterogeneity. The composition effects generated by search strategies in labor market aggregates are examined in section 4.5. The last section evaluates the model's replication of empirical unemployment escape frequencies reported in Sheldon [67]. Chapter 5 applies the model economy to examine the effects on the labor market equilibrium of shocks to the labor market risk structure, to the deep underlying labor market structure and to the UI setting. Section 5.2 examines the effects of the labor market risk structure on the labor market equilibrium and the labor market strategic behavior. The effects of alterations in the labor market deep economic structural parameters, i.e. individual preferences and production technology, are shown in Section 5.3. Finally, the UI setting impacts on the labor market are studied in Section 5.4. This section also evaluates the role of the UI authority monitoring and the differences in the way changes in the replacement rate and the UI benefit duration affect the labor market. In chapter 6 the model economy is applied in counterfactual experiments to assess several aspects of the Swiss labor market movements in the nineties. Section 6.2 examines the two equilibria characterizing the Swiss labor market in the nineties, the "growth" equilibrium with a "moderate" UI regime and the "recession" equilibrium with a more "generous" UI. Section 6.3 evaluates the isolated effects of the structural shocks, while the isolated effects of the UI reforms are analyzed in section 6.4. Particular dimensions of the UI reforms, the duration, replacement rate and the tax rate effects, are studied in section 6.5, while labor market equilibria without benefits are evaluated in section 6.6. In section 6.7 the structural and institutional interactions that may act as unemployment amplifiers are discussed in view of the obtained results. A welfare analysis based on individual welfare in different structural and UI settings is presented in the eighth section. Finally, the results are related to more favorable unemployment trends after 1997. The conclusion evaluates the features embodied in the model economy with respect to the resulting model dynamics to derive lessons from the model design. The thesis ends by proposing guidelines for future improvements of the model and directions for further research.

Relevance: 30.00%

Abstract:

This report documents Phase IV of the Highway Maintenance Concept Vehicle (HMCV) project, a pooled fund study sponsored by the Departments of Transportation of Iowa, Pennsylvania, and Wisconsin. It provides the background, including a brief history of the earlier phases of the project, a systems overview, and descriptions of the research conducted in Phase IV. Finally, the report provides conclusions and recommendations for future research. Background: The goal of the Highway Maintenance Concept Vehicle Pooled Fund Study is to provide travelers with the level of service defined by policy during the winter season at the least cost to taxpayers. This goal is to be accomplished by using information regarding actual road conditions to facilitate and adjust snow and ice control activities. The approach used in this study was to bring technology applications from other industries to the highway maintenance vehicle. This approach is evolutionary in that, as emerging technologies and applications are found to be acceptable to the pooled fund states and as they appear to have potential for supporting the study goals, they become candidates for our research. The objective of Phase IV is to conduct limited deployment of selected technologies from Phase III by equipping a vehicle with proven advanced technologies and creating a mobile test laboratory for collecting road weather data. The research quickly pointed out that investments in winter storm maintenance assets must be based on benefit/cost analysis and related to improving level of service. For example, Iowa has estimated the average cost of fighting a winter storm to be about $60,000 to $70,000 per hour. The maintenance concept vehicle will have advanced technology equipment capable of applying precisely the correct amount of material, accurately tailored to the existing and predicted pavement conditions. Hence, a state using advanced technology could expect to have a noticeable impact on the average time taken to establish the winter driving service level. If the concept vehicle and data produced by the vehicle are used to support decision-making leading to reducing material usage and the average time by one hour, a reasonable benefit/cost will result. Data from the friction meter can be used to monitor and adjust snow and ice control activities and inform travelers of pavement surface conditions. Therefore, final selection of successfully performing technologies will be based on the foundation statements and criteria developed by the study team.
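As a purely hypothetical illustration of the benefit/cost reasoning above: the hourly figure comes from the report, but the number of storms per season, the hours saved, and the equipment cost below are invented for the example.

    # Hypothetical benefit/cost sketch for instrumenting one maintenance vehicle.
    cost_per_storm_hour = 65_000    # midpoint of the $60,000-$70,000/hour estimate cited above
    storms_per_season = 25          # assumed number of winter storms per season (illustrative)
    hours_saved_per_storm = 1.0     # assumed reduction in time to restore the service level
    equipment_cost = 150_000        # assumed cost of the advanced-technology equipment (illustrative)

    annual_benefit = cost_per_storm_hour * storms_per_season * hours_saved_per_storm
    benefit_cost_ratio = annual_benefit / equipment_cost
    print(f"Annual benefit: ${annual_benefit:,.0f}, benefit/cost ratio: {benefit_cost_ratio:.1f}")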

Relevance: 30.00%

Abstract:

OBJECTIVES: Depression has been consistently reported in people with epilepsy. Several studies also suggest a higher burden of cardiovascular diseases. We therefore analysed psychosocial co-morbidity and cardiovascular risk factors in patients with a lifetime history of epilepsy in the PsyCoLaus study, a Swiss urban population-based assessment of mental health and cardiovascular risk factors in adults aged between 35 and 66 years. PATIENTS AND METHODS: Among 3719 participants in the PsyCoLaus study, we retrospectively identified those reporting at least 2 unprovoked seizures, defined as epilepsy. These subjects were compared to all others regarding psychiatric, social, and cardiovascular risk factor data using uni- and multivariable assessments. RESULTS: A significantly higher need for social help (p<0.001) represented the only independent difference between 43 subjects with a history of epilepsy and 3676 controls, while a higher prevalence of psychiatric co-morbidities (p=0.015) and a lower prevalence of being married (p=0.01) were significant only in univariate analyses. Depression and cardiovascular risk factors, as well as educational level and employment, were similar among the groups. CONCLUSIONS: This analysis confirms an increased prevalence of psychosocial burden in subjects with a lifetime history of epilepsy; conversely, we did not find a higher cardiovascular risk. The specific urban and geographical location of our cohort and the age span of the studied population may account for the differences from previous studies.

Relevance: 30.00%

Abstract:

The State of Santa Catarina, Brazil, has agricultural and livestock activities, such as pig farming, that are responsible for adding large amounts of phosphorus (P) to soils. However, a method is required to evaluate the environmental risk of these high soil P levels. One possible method for evaluating the environmental risk of P fertilization, whether organic or mineral, is to establish threshold levels of soil available P, measured by Mehlich-1 extractions, below which there is not a high risk of P transfer from the soil to surface waters. However, the Mehlich-1 extractant is sensitive to soil clay content, and that factor should be considered when establishing such P-thresholds. The objective of this study was to determine P-thresholds using the Mehlich-1 extractant for soils with different clay contents in the State of Santa Catarina, Brazil. Soil from the B-horizon of an Oxisol with 800 g kg-1 clay was mixed with different amounts of sand to prepare artificial soils with 200, 400, 600, and 800 g kg-1 clay. The artificial soils were incubated for 30 days with moisture content at 80 % of field capacity to stabilize their physicochemical properties, followed by additional incubation for 30 days after liming to raise the pH(H2O) to 6.0. Soil P sorption curves were produced, and the maximum sorption (Pmax) was determined using the Langmuir model for each soil texture evaluated. Based on the Pmax values, seven rates of P were added to four replicates of each soil, and incubated for 20 days more. Following incubation, available P contents (P-Mehlich-1) and P dissolved in the soil solution (P-water) were determined. A change-point value (the P-Mehlich-1 value above which P-water starts increasing sharply) was calculated through the use of segmented equations. The maximum level of P that a soil might safely adsorb (P-threshold) was defined as 80 % of the change-point value to maintain a margin for environmental safety. The P-threshold value, in mg dm-3, was dependent on the soil clay content according to the model P-threshold = 40 + Clay, where the soil clay content is expressed as a percentage. The model was tested in 82 diverse soil samples from the State of Santa Catarina and was able to distinguish samples with high and low environmental risk.
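Since the clay-dependent threshold is given as an explicit model, a short worked sketch can show how it would be applied; the helper below uses the published relation (P-threshold = 40 + clay %), while the sample values are invented for illustration.

    # Sketch of applying the clay-dependent environmental P-threshold model.
    def p_threshold(clay_percent: float) -> float:
        """Environmental P-threshold in mg/dm3 for a given clay content in % (model: 40 + clay)."""
        return 40.0 + clay_percent

    def high_risk(p_mehlich1: float, clay_percent: float) -> bool:
        """True if the soil's available P (Mehlich-1) exceeds the environmental threshold."""
        return p_mehlich1 > p_threshold(clay_percent)

    # Illustrative samples: (Mehlich-1 P in mg/dm3, clay content in %).
    for p, clay in [(55.0, 20.0), (95.0, 40.0), (70.0, 60.0)]:
        label = "high environmental risk" if high_risk(p, clay) else "low risk"
        print(f"clay={clay:.0f}%  P-Mehlich-1={p:.0f} mg/dm3  threshold={p_threshold(clay):.0f}  {label}")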

Relevance: 30.00%

Abstract:

La prévention primaire des maladies cardiovasculaires par les médecins s'effectue par une prise en charge individualisée des facteurs de risque. L'indication à un traitement par statines se base sur une estimation du risque de survenue d'une maladie cardiovasculaire et sur le taux de LDL-cholestérol. Trois scores de risque sont couramment utilisés: le score PROCAM, le score Framingham, et le SCORE européen. En Suisse, le Groupe Suisse Lipides et Athérosclérose (GSLA) recommande en première instance l'utilisation du score PROCAM avec une adaptation du niveau de risque pour la Suisse. Une enquête a aussi montré que c'est le score le plus utilisé en Suisse. Dans cet article, les particularités de ces scores et leurs applications pratiques en ce qui concerne la prescription de statines en prévention primaire sont discutées. Les conséquences et les bénéfices potentiels de l'application de ces scores en Suisse sont également abordés. [Abstract] Primary prevention of cardiovascular disease by physicians is achieved by management of individual risk factors. Eligibility for treatment with statins is based on both an estimate of the risk of developing cardiovascular disease and the LDL-cholesterol level. Three risk scores are commonly used: the PROCAM score, the Framingham score, and the European SCORE. In Switzerland, the Swiss Group Lipids and Atherosclerosis (GSLA) recommends using the PROCAM score with an adjustment of the level of risk for Switzerland. A survey also showed that PROCAM is the score most widely used in Switzerland. In this article, the differences between these scores and their practical applications regarding the prescription of statins in primary prevention are discussed. The consequences and potential benefits of applying these scores in Switzerland are also discussed.

Relevance: 30.00%

Abstract:

STUDY OBJECTIVES: To evaluate the association between objective sleep measures and metabolic syndrome (MS), hypertension, diabetes, and obesity. DESIGN: Cross-sectional study. SETTING: General population sample. PARTICIPANTS: There were 2,162 participants (51.2% women, mean age 58.4 ± 11.1 years). INTERVENTIONS: Participants were evaluated for hypertension, diabetes, overweight/obesity, and MS, and underwent full polysomnography (PSG). MEASUREMENTS AND RESULTS: PSG-measured variables included: total sleep time (TST), percentage and time spent in slow wave sleep (SWS) and in rapid eye movement (REM) sleep, sleep efficiency and arousal index (ArI). In univariate analyses, MS was associated with decreased TST, SWS, REM sleep, and sleep efficiency, and increased ArI. After adjustment for age, sex, smoking, alcohol, physical activity, drugs that affect sleep and depression, the ArI remained significantly higher, but the difference disappeared in patients without significant sleep-disordered breathing (SDB). Differences in sleep structure were also found according to the presence or absence of hypertension, diabetes, and overweight/obesity in univariate analysis. However, these differences were attenuated after multivariate adjustment and after excluding subjects with significant SDB. CONCLUSIONS: In this population-based sample we found significant associations between sleep structure and MS, hypertension, diabetes, and obesity. However, these associations disappeared after multivariate adjustment. We conclude that normal variations in sleep contribute little, if at all, to MS and associated disorders.

Relevance: 30.00%

Abstract:

OBJECTIVE: To assess the prevalence of cardiovascular (CV) risk factors in Seychelles, a middle-income African country, and compare the cost-effectiveness of single-risk-factor management (treating individuals with arterial blood pressure ≥ 140/90 mmHg and/or total serum cholesterol ≥ 6.2 mmol/l) with that of management based on total CV risk (treating individuals with a total CV risk ≥ 10% or ≥ 20%). METHODS: CV risk factor prevalence and a CV risk prediction chart for Africa were used to estimate the 10-year risk of suffering a fatal or non-fatal CV event among individuals aged 40-64 years. These figures were used to compare single-risk-factor management with total risk management in terms of the number of people requiring treatment to avert one CV event and the number of events potentially averted over 10 years. Treatment for patients with high total CV risk (≥ 20%) was assumed to consist of a fixed-dose combination of several drugs (polypill). Cost analyses were limited to medication. FINDINGS: A total CV risk of ≥ 10% and ≥ 20% was found among 10.8% and 5.1% of individuals, respectively. With single-risk-factor management, 60% of adults would need to be treated and 157 cardiovascular events per 100 000 population would be averted per year, as opposed to 5% of adults and 92 events with total CV risk management. Management based on high total CV risk optimizes the balance between the number requiring treatment and the number of CV events averted. CONCLUSION: Total CV risk management is much more cost-effective than single-risk-factor management. These findings are relevant for all countries, but especially for those economically and demographically similar to Seychelles.
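Using only the proportions quoted above, and an assumed adult population of 100,000 purely for scale, the trade-off between the two strategies can be restated as a number-needed-to-treat calculation; this is a back-of-the-envelope sketch, not the paper's cost model.

    # Back-of-the-envelope comparison of the two strategies, per 100,000 adults per year.
    population = 100_000

    strategies = {
        # (fraction of adults treated, CV events averted per 100,000 per year), from the abstract
        "single-risk-factor management": (0.60, 157),
        "total CV risk management": (0.05, 92),
    }

    for name, (treated_fraction, events_averted) in strategies.items():
        treated = treated_fraction * population
        nnt = treated / events_averted  # people treated for one year to avert one CV event
        print(f"{name}: {treated:,.0f} treated, {events_averted} events averted, NNT ~ {nnt:,.0f}")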

Relevance: 30.00%

Abstract:

What do we know about the effectiveness of various treatments of alcoholism? This review of the literature shows that the lack, or weaknesses, of published studies makes it impossible to draw definite conclusions. Rigorous controlled studies show high rates of spontaneous remission and important uncertainties about specialised treatments of alcoholism. However, except for severe dependence that may well require a different approach, brief interventions conducted by non-specialists have proved highly effective for at-risk alcohol drinkers: based on minimal medical advice, they increase the chances of lowering alcohol consumption. General practitioners may thus represent an important link in the therapeutic chain.

Relevance: 30.00%

Abstract:

Neuroticism is a moderately heritable personality trait considered to be a risk factor for developing major depression, anxiety disorders and dementia. We performed a genome-wide association study in 2,235 participants drawn from a population-based study of neuroticism, making this the largest association study for neuroticism to date. Neuroticism was measured by the Eysenck Personality Questionnaire. After quality control, we analysed 430,000 autosomal SNPs together with an additional 1.2 million SNPs imputed with high quality from the HapMap CEU samples. We found a very small effect of population stratification, corrected using one principal component, and some cryptic kinship that required no correction. NKAIN2 showed suggestive evidence of association with neuroticism as a main effect (p < 10^-6) and GPC6 showed suggestive evidence for interaction with age (p ≈ 10^-7). We found support for one previously reported association (PDE4D), but failed to replicate other recent reports. These results suggest common SNP variation does not strongly influence neuroticism. Our study was powered to detect almost all SNPs explaining at least 2% of heritability, and so our results effectively exclude the existence of loci having a major effect on neuroticism.

Relevance: 30.00%

Abstract:

BACKGROUND & AIMS: Development of strictures is a major concern for patients with eosinophilic esophagitis (EoE). At diagnosis, EoE can present with an inflammatory phenotype (characterized by whitish exudates, furrows, and edema), a stricturing phenotype (characterized by rings and stenosis), or a combination of these. Little is known about progression of stricture formation; we evaluated stricture development over time in the absence of treatment and investigated risk factors for stricture formation. METHODS: We performed a retrospective study using the Swiss EoE Database, collecting data on 200 patients with symptomatic EoE (153 men; mean age at diagnosis, 39 ± 15 years old). Stricture severity was graded based on the degree of difficulty associated with passing of the standard adult endoscope. RESULTS: The median delay in diagnosis of EoE was 6 years (interquartile range, 2-12 years). With increasing duration of delay in diagnosis, the prevalence of fibrotic features of EoE, based on endoscopy, increased from 46.5% (diagnostic delay, 0-2 years) to 87.5% (diagnostic delay, >20 years; P = .020). Similarly, the prevalence of esophageal strictures increased with duration of diagnostic delay, from 17.2% (diagnostic delay, 0-2 years) to 70.8% (diagnostic delay, >20 years; P < .001). Diagnostic delay was the only risk factor for strictures at the time of EoE diagnosis (odds ratio = 1.08; 95% confidence interval: 1.040-1.122; P < .001). CONCLUSIONS: The prevalence of esophageal strictures correlates with the duration of untreated disease. These findings indicate the need to minimize delay in diagnosis of EoE.

Relevance: 30.00%

Abstract:

Cardiac rhabdomyomas are benign cardiac tumours with few cardiac complications, but with a known association to tuberous sclerosis that affects the neurologic outcome of the patients. We have analysed the long-term cardiac and neurological outcomes of patients with cardiac rhabdomyomas in order to allow comprehensive prenatal counselling, basing our findings on the records of all patients seen prenatally and postnatally with an echocardiographic diagnosis of cardiac rhabdomyoma encountered from August, 1982, to September, 2007. We analysed factors such as the number and the location of the tumours to establish their association with a diagnosis of tuberous sclerosis, predicting the cardiac and neurologic outcomes for the patients. Cardiac complications include arrhythmias, obstruction of the ventricular outflow tracts, and secondary cardiogenic shock. Arrhythmias were encountered most often during the neonatal period, with supraventricular tachycardia being the commonest rhythm disturbance identified. No specific dimension or location of the cardiac rhabdomyomas predicted the disturbances of rhythm. The importance of the diagnosis of tuberous sclerosis is exemplified by the neurodevelopmental complications, with four-fifths of the patients showing epilepsy, and two-thirds having delayed development. The presence of multiple cardiac tumours suggested a higher risk of being affected by tuberous sclerosis. The tumours generally regress after birth, and cardiac-related problems are rare after the perinatal period. Tuberous sclerosis and the associated neurodevelopmental complications dominate the clinical picture, and should form an important aspect of the prenatal counselling of parents.

Relevance: 30.00%

Abstract:

Background: Post-surgical management of stage I seminoma includes either surveillance (S) with repeated CT scans, with treatment reserved for those who relapse, or adjuvant treatment with either immediate radiation therapy (RT) or carboplatin. The cancer-specific survival is close to 100%. Cure without long-term sequelae of treatment is the aim. Our goal is to estimate the risk of death from radiation-induced secondary cancers (SC) for patients undergoing S, adjuvant RT, or adjuvant carboplatin (AC). Materials and Methods: We measured organ doses from CT scans (3 phases each) of a seminoma patient who was part of the active surveillance strategy and from a man undergoing a 20-Gy adjuvant RT and a 30-Gy salvage RT treatment to the para-aortic area using helical Intensity Modulated RT (Tomotherapy®) with accurate delineation of organs at risk and a CTV to PTV expansion of 1 cm. Effective doses to organs in mSv were estimated according to the tissue-weighting factor recommendations of the International Commission on Radiological Protection 103 (Ann ICRP 2007). We estimated SC incidence and mortality for a population of 10,000 people based on the excess absolute risk model from the Biological Effects of Ionizing Radiation (BEIR) VII report (Health Risks from Exposure to Low Levels of Ionizing Radiation, NRC, The National Academies Press, Washington, DC, 2006), assuming a seminoma diagnosis at age 30 and a total life expectancy of 80 years. Results: The nominal risk of a fatal secondary cancer was calculated to be 1.5% for 15 abdominal CT scans, 14.8% for adjuvant RT (20 Gy para-aortic field) and 22.2% for salvage RT (30 Gy). The calculation assumed that the risk of relapse on surveillance and on AC was 15% and 4%, respectively, and that all patients were salvaged at relapse with RT.

Treatment strategy       n CT abdomen/pelvis = SC %    RT dose and % receiving treatment = SC %    Total SC risk (%)
Active surveillance      15 = 1.5%                     30 Gy in 15% of pts = 3.3%                  4.8
Adjuvant carboplatin      7 = 0.7%                     30 Gy in 4% of pts = 0.88%                  1.58
Adjuvant radiotherapy     7 = 0.7%                     20 Gy in 100% of pts = 14.8%                15.5

Conclusions: These data suggest that: 1) adjuvant radiotherapy is harmful and should no longer be regarded as a standard option for stage I seminoma; 2) AC seems to be an option to reduce radiation-induced cancers. Limitations: the study does not consider secondary cancers due to chemotherapy with AC (unknown). The use of BEIR VII for risk modeling with higher doses of RT needs to be validated.
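The totals in the table follow from weighting the per-modality risks by the fraction of patients exposed to them; the Python sketch below simply reproduces that arithmetic from the figures given in the abstract, as a check of the reported totals.

    # Reproduce the total fatal secondary-cancer risk per strategy from the abstract's figures.
    risk_per_ct = 1.5 / 15    # % risk attributed to one 3-phase abdominal CT (1.5% for 15 scans)
    risk_rt_30gy = 22.2       # % risk for 30-Gy salvage RT
    risk_rt_20gy = 14.8       # % risk for 20-Gy adjuvant RT

    strategies = {
        # (number of CT scans, % of patients receiving RT, RT risk in %)
        "Active surveillance": (15, 15.0, risk_rt_30gy),    # 15% relapse, salvaged with 30 Gy
        "Adjuvant carboplatin": (7, 4.0, risk_rt_30gy),     # 4% relapse, salvaged with 30 Gy
        "Adjuvant radiotherapy": (7, 100.0, risk_rt_20gy),  # all patients receive 20 Gy up front
    }

    for name, (n_ct, pct_rt, rt_risk) in strategies.items():
        total = n_ct * risk_per_ct + (pct_rt / 100.0) * rt_risk
        # e.g. surveillance: 1.5% + 0.15 * 22.2% = 4.8%; the abstract truncates
        # 0.888% to 0.88%, hence its carboplatin total of 1.58% rather than 1.59%.
        print(f"{name}: total fatal secondary cancer risk ~ {total:.2f}%")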

Relevance: 30.00%

Abstract:

Recommendations for statin use for primary prevention of coronary heart disease (CHD) are based on estimation of the 10-year CHD risk. It is unclear which risk algorithm and guidelines should be used in European populations. Using data from a population-based study in Switzerland, we first assessed 10-year CHD risk and eligibility for statins in 5,683 women and men 35 to 75 years of age without cardiovascular disease by comparing recommendations by the European Society of Cardiology without and with extrapolation of risk to age 60 years, the International Atherosclerosis Society, and the US Adult Treatment Panel III. The proportions of participants classified as high-risk for CHD were 12.5% (15.4% with extrapolation), 3.0%, and 5.8%, respectively. Proportions of participants eligible for statins were 9.2% (11.6% with extrapolation), 13.7%, and 16.7%, respectively. Assuming full compliance to each guideline, expected relative decreases in CHD deaths in Switzerland over a 10-year period would be 16.4% (17.5% with extrapolation), 18.7%, and 19.3%, respectively; the corresponding numbers needed to treat to prevent 1 CHD death would be 285 (340 with extrapolation), 380, and 440, respectively. In conclusion, the proportion of subjects classified as high risk for CHD varied over a fivefold range across recommendations. Following the International Atherosclerosis Society and the Adult Treatment Panel III recommendations might prevent more CHD deaths at the cost of higher numbers needed to treat compared with European Society of Cardiology guidelines.
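The three quantities reported for each guideline (eligible fraction, expected relative reduction in CHD deaths, and number needed to treat) are linked by a simple identity: NNT = eligible fraction / (relative reduction × baseline 10-year CHD death risk). The sketch below uses the published fractions together with a hypothetical baseline mortality, only to show how numbers of this order arise; it does not reproduce the paper's exact calculation.

    # Illustrative NNT calculation: people treated per CHD death prevented over 10 years.
    baseline_chd_death_risk = 0.002  # hypothetical 10-year CHD death risk across the whole cohort

    guidelines = {
        # (fraction eligible for statins, expected relative reduction in CHD deaths), from the abstract
        "ESC": (0.092, 0.164),
        "IAS": (0.137, 0.187),
        "ATP III": (0.167, 0.193),
    }

    for name, (eligible, relative_reduction) in guidelines.items():
        deaths_prevented_per_person = relative_reduction * baseline_chd_death_risk
        nnt = eligible / deaths_prevented_per_person
        print(f"{name}: NNT to prevent one CHD death ~ {nnt:,.0f}")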