950 results for value at risk
Abstract:
We prospectively compared the diagnostic value of C-reactive protein (CRP) and white blood cell counts for the detection of neonatal septicaemia. Sensitivity and specificity (from receiver operating characteristic analysis) and the positive and negative predictive values of CRP and white blood cell counts were compared in 195 critically ill preterm and term newborns clinically suspected of infection. Blood cultures were positive in 33 cases. During the first 3 days after birth, CRP elevation (sensitivity 75%, specificity 86%), leukopenia (67%/90%), neutropenia (78%/80%) and the immature-to-total neutrophil (I/T) ratio (78%/73%) were good diagnostic parameters, as opposed to band forms as an absolute count (84%/66%) or percentage (79%/71%), thrombocytopenia (65%/57%) and toxic granulations (44%/94%). Beyond 3 days of age, elevated CRP (88%/87%) was the best parameter; an increased total (84%/66%) or percentage (79%/71%) band count was also useful. Leukocytosis (74%/56%), increased neutrophils (67%/65%), the I/T ratio (79%/47%), thrombocytopenia (65%/57%) and toxic granulations had low specificity. The positive predictive value of CRP was 32% before and 37% after 3 days of age; that of leukopenia was 37% in the first 3 days. CONCLUSION: During the first 3 days of life, CRP, leukopenia and neutropenia were comparably good tests, while after 3 days of life CRP was the best single test for the early detection of neonatal septicaemia. Serial CRP estimations confirm the diagnosis and monitor the course of infection and the efficacy of antibiotic treatment.
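As an illustration of how the predictive values above follow from sensitivity, specificity, and prevalence, here is a minimal Python sketch applying Bayes' rule; the 10% prevalence is an assumed figure for illustration, not taken from the study.

```python
# Illustrative only: how PPV/NPV follow from sensitivity, specificity,
# and an assumed disease prevalence (Bayes' rule).

def predictive_values(sens: float, spec: float, prev: float) -> tuple[float, float]:
    """Return (PPV, NPV) for given sensitivity, specificity, prevalence."""
    tp = sens * prev              # true positives per unit population
    fp = (1 - spec) * (1 - prev)  # false positives
    tn = spec * (1 - prev)        # true negatives
    fn = (1 - sens) * prev        # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# CRP in the first 3 days (sensitivity 75%, specificity 86%) at an
# assumed 10% prevalence of culture-proven septicaemia:
ppv, npv = predictive_values(0.75, 0.86, 0.10)
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")  # PPV = 37%, NPV = 97%
```

Note how strongly the PPV depends on prevalence, which is why the reported positive predictive values (32-37%) are low despite good sensitivity and specificity.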
Abstract:
Postoperative delirium after cardiac surgery is associated with increased morbidity and mortality as well as prolonged stays in both the intensive care unit and the hospital. The authors sought to identify modifiable risk factors associated with the development of postoperative delirium in elderly patients after elective cardiac surgery, in order to design follow-up studies aimed at preventing delirium by optimizing perioperative management. DESIGN: A post hoc analysis of data from patients enrolled in a randomized controlled trial. SETTING: A single university hospital. PARTICIPANTS: One hundred thirteen patients aged 65 or older undergoing elective cardiac surgery with cardiopulmonary bypass. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Screening for delirium was performed using the Confusion Assessment Method (CAM) on the first 6 postoperative days. A multivariable logistic regression model was developed to identify significant risk factors and to control for confounders. Delirium developed in 35 of 113 patients (30%). The multivariable model showed the maximum value of C-reactive protein measured postoperatively, the dose of fentanyl per kilogram of body weight administered intraoperatively, and the duration of mechanical ventilation to be independently associated with delirium. In this post hoc analysis, larger doses of fentanyl administered intraoperatively and longer duration of mechanical ventilation were associated with postoperative delirium in the elderly after cardiac surgery. Prospective randomized trials should test whether a reduced intraoperative dose of fentanyl, the use of a different opioid, or weaning protocols aimed at early extubation prevent delirium in these patients.
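A minimal sketch of the kind of multivariable logistic model the study describes (delirium versus peak postoperative CRP, intraoperative fentanyl dose per kilogram, and duration of mechanical ventilation); the file and column names are hypothetical placeholders, not the study's data.

```python
# Sketch: multivariable logistic regression for a binary outcome with
# statsmodels. All names below are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("delirium_cohort.csv")  # hypothetical cohort file
model = smf.logit(
    "delirium ~ crp_max + fentanyl_ug_per_kg + ventilation_hours", data=df
).fit()
print(model.summary())
print(np.exp(model.params))  # exponentiated coefficients = odds ratios
```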
Abstract:
BACKGROUND: Multiple risk prediction models have been validated in all-age patients presenting with acute coronary syndrome (ACS) and treated with percutaneous coronary intervention (PCI); however, they have not been validated specifically in the elderly. METHODS: We calculated the GRACE (Global Registry of Acute Coronary Events) score, the logistic EuroSCORE, the AMIS (Acute Myocardial Infarction Swiss registry) score, and the SYNTAX (Synergy between Percutaneous Coronary Intervention with TAXUS and Cardiac Surgery) score in a consecutive series of 114 patients ≥75 years presenting with ACS and treated with PCI within 24 hours of hospital admission. Patients were stratified according to score tertiles and analysed retrospectively by comparing the lower/mid tertiles, as an aggregate group, with the upper tertile group. The primary endpoint was 30-day mortality. Secondary endpoints were the composite of death and major adverse cardiovascular events (MACE) at 30 days, and 1-year MACE-free survival. Model discrimination was assessed using the area under the receiver operating characteristic curve (AUC). RESULTS: Thirty-day mortality was higher in the upper tertile than in the aggregated lower/mid tertiles according to the logistic EuroSCORE (42% vs 5%; odds ratio [OR] = 14, 95% confidence interval [CI] = 4-48; p <0.001; AUC = 0.79), the GRACE score (40% vs 4%; OR = 17, 95% CI = 4-64; p <0.001; AUC = 0.80), the AMIS score (40% vs 4%; OR = 16, 95% CI = 4-63; p <0.001; AUC = 0.80), and the SYNTAX score (37% vs 5%; OR = 11, 95% CI = 3-37; p <0.001; AUC = 0.77). CONCLUSIONS: In elderly patients presenting with ACS and referred for PCI within 24 hours of admission, the GRACE score, the logistic EuroSCORE, the AMIS score, and the SYNTAX score all predicted 30-day mortality. The predictive value of the clinical scores was improved by using them in combination.
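For readers wanting to reproduce this kind of discrimination analysis, a minimal sketch using scikit-learn follows; the scores and outcomes are invented for illustration, not data from the study.

```python
# Sketch: AUC and tertile stratification for a risk score, as done for
# the GRACE/EuroSCORE/AMIS/SYNTAX scores above. Data are illustrative.
import numpy as np
from sklearn.metrics import roc_auc_score

score = np.array([112, 145, 98, 170, 133, 155, 120, 185])  # e.g. GRACE
died_30d = np.array([0, 0, 0, 1, 0, 1, 0, 1])              # 0/1 labels
print(f"AUC = {roc_auc_score(died_30d, score):.2f}")

# Tertile stratification: upper tertile vs pooled lower/mid tertiles.
upper_cut = np.quantile(score, 2 / 3)
high = score >= upper_cut
print("30-day mortality, upper tertile:", died_30d[high].mean())
print("30-day mortality, lower/mid:   ", died_30d[~high].mean())
```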
Abstract:
Criteria to decide which patients with rheumatoid arthritis (RA) should be examined by dual-energy X-ray absorptiometry (DXA) are currently not available. Rheumatologists from Amsterdam have proposed preliminary criteria based on clinical risk factors (age, disease activity, and functional status). These criteria are not yet widely accepted but might be helpful in practice. The value of the proposal in a group of Spanish postmenopausal women with RA is analysed. METHODS: DXA (lumbar spine and femoral neck) was performed in 128 patients recruited from a clinical setting, and the proposed criteria were applied. T and Z scores were established against a Spanish reference population. RESULTS: The mean (SD) age of the patients was 61.3 (10.7) years and the mean duration of the postmenopausal period 14.5 (10.1) years. Mean duration of RA was 13.7 (7.7) years. Mean C-reactive protein was 22 (21) mg/l; mean erythrocyte sedimentation rate, 26 (18) mm/1st h; and mean Health Assessment Questionnaire score, 1.25 (0.79). Ninety (70%) patients fulfilled the proposed criteria. The criteria's sensitivity for the diagnosis of osteoporosis (T score ≤ -2.5 SD) was 86% and their specificity 43%. The positive predictive value was 54% and the negative predictive value 79%. CONCLUSIONS: The proposed criteria seem a good screening method for selecting those patients with RA whose bone mineral density should be assessed, as the sensitivity and negative predictive value are acceptable.
Abstract:
BACKGROUND: Fractional flow reserve (FFR) has become an established tool for guiding treatment, but its graded relationship to clinical outcomes as modulated by medical therapy versus revascularization remains unclear. OBJECTIVES: The study hypothesized that FFR displays a continuous relationship between its numeric value and prognosis, such that lower FFR values confer a higher risk and therefore receive larger absolute benefits from revascularization. METHODS: Meta-analysis of study- and patient-level data investigated prognosis after FFR measurement. An interaction term between FFR and revascularization status allowed for an outcomes-based threshold. RESULTS: A total of 9,173 (study-level) and 6,961 (patient-level) lesions were included with a median follow-up of 16 and 14 months, respectively. Clinical events increased as FFR decreased, and revascularization showed larger net benefit for lower baseline FFR values. Outcomes-derived FFR thresholds generally occurred around the range 0.75 to 0.80, although limited due to confounding by indication. FFR measured immediately after stenting also showed an inverse relationship with prognosis (hazard ratio: 0.86, 95% confidence interval: 0.80 to 0.93; p < 0.001). An FFR-assisted strategy led to revascularization roughly half as often as an anatomy-based strategy, but with 20% fewer adverse events and 10% better angina relief. CONCLUSIONS: FFR demonstrates a continuous and independent relationship with subsequent outcomes, modulated by medical therapy versus revascularization. Lesions with lower FFR values receive larger absolute benefits from revascularization. Measurement of FFR immediately after stenting also shows an inverse gradient of risk, likely from residual diffuse disease. An FFR-guided revascularization strategy significantly reduces events and increases freedom from angina with fewer procedures than an anatomy-based strategy.
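A minimal sketch of the interaction analysis the methods describe: a Cox proportional-hazards model with an FFR-by-revascularization interaction term, here using the lifelines library. The file and column names are hypothetical placeholders.

```python
# Sketch: Cox model with an FFR x revascularization interaction.
# All names below are hypothetical placeholders, not the study's data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("ffr_lesions.csv")  # hypothetical: ffr, revasc (0/1), time, event
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event",
        formula="ffr + revasc + ffr:revasc")
cph.print_summary()
# The FFR value at which the revascularization and medical-therapy
# hazards cross gives an outcomes-based threshold.
```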
Abstract:
AIMS: We studied the respective added value of quantitative myocardial blood flow (MBF) and myocardial flow reserve (MFR), as assessed with (82)Rb positron emission tomography (PET)/CT, in predicting major adverse cardiovascular events (MACEs) in patients with suspected myocardial ischaemia. METHODS AND RESULTS: Myocardial perfusion images were analysed semi-quantitatively (SDS, summed difference score) and quantitatively (MBF, MFR) in 351 patients. Follow-up was completed in 335 patients, and annualized MACE (cardiac death, myocardial infarction, revascularization, or hospitalization for congestive heart failure or de novo stable angina) rates were analysed with the Kaplan-Meier method in 318 patients after excluding 17 patients with early revascularization (<60 days). Independent predictors of MACEs were identified by multivariate analysis. During a median follow-up of 624 days (inter-quartile range 540-697), 35 MACEs occurred. The annualized MACE rate was higher in patients with ischaemia (SDS >2) (n = 105) than in those without [14% (95% CI = 9.1-22%) vs. 4.5% (2.7-7.4%), P < 0.0001]. The lowest MFR tertile group (MFR <1.8) had the highest MACE rate [16% (11-25%) vs. 2.9% (1.2-7.0%) and 4.3% (2.1-9.0%), P < 0.0001]. Similarly, the lowest stress MBF tertile group (MBF <1.8 mL/min/g) had the highest MACE rate [14% (9.2-22%) vs. 7.3% (4.2-13%) and 1.8% (0.6-5.5%), P = 0.0005]. Quantitation with stress MBF or MFR had significant independent prognostic power in addition to the semi-quantitative findings. The largest added value was conferred by combining stress MBF with SDS; this held true even for patients without ischaemia. CONCLUSION: Perfusion findings on (82)Rb PET/CT are strong predictors of MACE outcomes. MBF quantification adds value by allowing further risk stratification in patients with normal and abnormal perfusion images.
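A short sketch of the Kaplan-Meier analysis used for the annualized MACE rates, using the lifelines library; the file and fields are hypothetical placeholders.

```python
# Sketch: Kaplan-Meier event-free survival by MFR tertile with lifelines.
# File and column names are hypothetical placeholders.
import pandas as pd
from lifelines import KaplanMeierFitter

df = pd.read_csv("pet_followup.csv")  # one row per patient (hypothetical)
kmf = KaplanMeierFitter()
for label, grp in df.groupby("mfr_tertile"):  # e.g. "<1.8", "1.8-2.6", ">2.6"
    kmf.fit(grp["followup_days"], event_observed=grp["mace"], label=str(label))
    kmf.plot_survival_function()
```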
Abstract:
OBJECTIVE: Overanticoagulated medical inpatients may be particularly prone to bleeding complications. Among medical inpatients with excessive oral anticoagulation (AC), we sought to identify patient and treatment factors associated with bleeding. METHODS: We prospectively identified consecutive patients receiving oral AC admitted to the medical ward of a university hospital (February-July 2006) who had at least one international normalized ratio (INR) value >3.0 during the hospital stay. We recorded patient characteristics, AC-related factors, and concomitant treatments (e.g., platelet inhibitors) that increase the bleeding risk. The outcome was overall bleeding, defined as the occurrence of major or minor bleeding during the hospital stay. We used logistic regression to explore patient and treatment factors associated with bleeding. RESULTS: Overall, 145 inpatients with excessive oral AC comprised our study sample. Atrial fibrillation (59%) and venous thromboembolism (28%) were the most common indications for AC. Twelve patients (8.3%) experienced a bleeding event; of these, 8 had major bleeding. Women had a somewhat higher risk of major bleeding than men (12.5% vs 4.1%, p = 0.08). Multivariable analysis demonstrated that female gender was independently associated with bleeding (odds ratio [OR] 4.3, 95% confidence interval [CI] 1.1-17.8). Age, history of major bleeding, value of the index INR, and concomitant treatment with platelet inhibitors were not independent predictors of bleeding. CONCLUSIONS: We found that hospitalized women experiencing an episode of excessive oral AC have a 4-fold increased risk of bleeding compared with men. Whether overanticoagulated women require more aggressive measures of AC reversal must be examined in further studies.
Abstract:
Background: In haemodynamically stable patients with acute symptomatic pulmonary embolism (PE), studies have not evaluated the usefulness of combining cardiac troponin measurement, transthoracic echocardiography (TTE), and lower-extremity complete compression ultrasound (CCUS) for predicting the risk of PE-related death. Methods: The study assessed the ability of three diagnostic tests (cardiac troponin I (cTnI), echocardiography, and CCUS) to prognosticate the primary outcome of PE-related mortality during 30 days of follow-up after a diagnosis of PE by objective testing. Results: Of 591 normotensive patients diagnosed with PE, 37 (6.3%; 95% CI 4.3% to 8.2%) experienced the primary outcome. Patients with right ventricular dysfunction (RVD) on TTE and concomitant deep vein thrombosis (DVT) on CCUS had a PE-related mortality of 19.6%, compared with 17.1% for patients with elevated cTnI and concomitant DVT, and 15.2% for patients with elevated cTnI and RVD. Any two-test strategy had a higher specificity and positive predictive value than any single test. A combined three-test strategy did not further improve prognostication. In a subgroup analysis of high-risk patients according to the pulmonary embolism severity index (classes IV and V), the positive predictive values of the two-test strategies for PE-related mortality were 25.0%, 24.4% and 20.7%, respectively, in the order given above. Conclusions: In haemodynamically stable patients with acute symptomatic PE, a combination of echocardiography (or troponin testing) and CCUS improved prognostication over any single test for identifying those at high risk of PE-related death.
Abstract:
The value of driving
We as Americans - and especially as Iowans - value the independence of getting around in our own vehicles and staying connected with our families and communities. The majority of older Iowans enjoy a more active, healthy and longer life than previous generations. Freedom of mobility shapes our quality of life. With aging, driving becomes an increasing concern for older Iowans and their families. How we deal with changes in our driving ability and, eventually, choose when and how to retire from driving will affect our safety and our quality of life.
Abstract:
The choice to adopt risk-sensitive measurement approaches for operational risks: the case of the Advanced Measurement Approach under the Basel II New Capital Accord
This paper investigates the choice of operational risk approach under Basel II requirements and whether the adoption of advanced risk measurement approaches allows banks to save capital. Among the three possible approaches to operational risk measurement, the Advanced Measurement Approach (AMA) is the most sophisticated and requires the use of historical loss data, the application of statistical tools, and the engagement of highly qualified staff. Our results provide evidence that the adoption of AMA is contingent on the availability of bank resources and prior experience with risk-sensitive operational risk measurement practices. Moreover, banks that choose AMA exhibit lower capital requirements and, as a result, might gain a competitive advantage over banks that opt for less sophisticated approaches.
Internal Risk Controls and their Impact on Bank Solvency
Recent cases in the financial sector have shown the importance of risk management controls for risk taking and firm performance. Despite advances in the design and implementation of risk management mechanisms, there is little research on their impact on the behavior and performance of firms. Based on data from a sample of 88 banks covering the period 2004-2010, we provide evidence that internal risk controls affect the solvency of banks. In addition, our results show that the level of internal risk controls leads to a higher degree of solvency in banks with a major shareholder, in contrast to widely held banks. However, the relationship between internal risk controls and bank solvency is negatively affected by BHC growth strategies and external restrictions on bank activities, while higher regulatory requirements for bank capital positively moderate this relationship.
The Impact of the Sophistication of Risk Measurement Approaches under Basel II on Bank Holding Companies' Value
Previous research has shown the importance of external regulation for banks' behavior. Inefficient standards may accentuate risk-taking in banks and provoke a financial crisis. Despite the growing literature on the potential effects of Basel II rules, there is little empirical research on the efficiency of risk-sensitive capital measurement approaches and their impact on bank profitability and market valuation. Based on data from a sample of 66 banks covering the period 2008-2010, we provide evidence that prudential ratios computed under Basel II standards predict the value of banks. However, this relation is contingent on the degree of sophistication of the risk measurement approaches that banks apply. Capital ratios are effective in predicting bank market valuation when banks adopt the advanced approaches to compute the value of their risk-weighted assets.
Abstract:
A high heart rate (HR) predicts future cardiovascular events. We explored the predictive value of HR in patients with high-risk hypertension and examined whether blood pressure reduction modifies this association. The participants were 15,193 patients with hypertension enrolled in the Valsartan Antihypertensive Long-term Use Evaluation (VALUE) trial and followed up for 5 years. HR was assessed from electrocardiographic recordings obtained annually throughout the study period. The primary end point was the interval to cardiac events. After adjustment for confounders, the hazard ratio of the composite cardiac primary end point per 10-beats/min increment in baseline HR was 1.16 (95% confidence interval 1.12 to 1.20). Compared with the lowest HR quintile, the adjusted hazard ratio in the highest quintile was 1.73 (95% confidence interval 1.46 to 2.04). Compared with the pooled lower quintiles of baseline HR, the annual incidence of the primary end point in the top baseline quintile was greater in each of the 5 study years (all p <0.05). The adjusted hazard ratio for the primary end point in the highest versus the lowest in-trial HR quintile was 1.53 (95% confidence interval 1.26 to 1.85). The incidence of primary end points in the highest in-trial HR group compared with the pooled 4 lower quintiles was 53% greater in patients with well-controlled blood pressure (p <0.001) and 34% greater in those with uncontrolled blood pressure (p = 0.002). In conclusion, an increased HR is a long-term predictor of cardiovascular events in patients with high-risk hypertension. This effect was not modified by good blood pressure control. It is not yet known whether a therapeutic reduction of HR would improve cardiovascular prognosis.
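A per-10-beats/min hazard ratio scales multiplicatively to other increments; the arithmetic below is illustrative and uses only the reported HR of 1.16.

```python
# Illustrative arithmetic: scaling a per-10-beats/min hazard ratio.
import math

hr_per_10 = 1.16                    # reported adjusted hazard ratio
beta = math.log(hr_per_10) / 10     # log-hazard per 1 beat/min
for delta in (10, 20, 30):
    print(f"+{delta} beats/min -> HR {math.exp(beta * delta):.2f}")
# +30 beats/min -> HR 1.56, of the same order as the reported top-vs-bottom
# quintile estimate of 1.73 (which also reflects quintile spacing and
# adjustment differences).
```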
Abstract:
For several years, scientists and society at large have been concerned about the risks that organic micropollutants pose to the aquatic environment. Numerous studies have demonstrated the toxic effects these substances can have on organisms living in our lakes and rivers when exposed to acute or chronic concentrations. However, most of these studies have focused on the toxicity of single substances considered individually, as do the current European regulatory procedures for environmental risk assessment. Yet aquatic organisms are exposed every day to thousands of substances in mixture, and the effects of these "cocktails" are not negligible. The ecological risk assessment of such mixtures must therefore be addressed in the most reliable and appropriate way. This doctoral thesis examined models for predicting the environmental risk of these cocktails to the aquatic environment. The main objective was to assess the ecological risk of the mixtures of chemical substances measured in Lake Geneva, and to take a critical look at the methodologies used in order to propose adaptations for better risk estimation.
In the first part of this work, the procedures currently envisioned in European legislation for aquatic mixture risk assessment were applied to two case studies, Lake Geneva and the River Rhône in Switzerland. These screening methodologies are based on the concentration addition mixture model, using either predicted no-effect concentrations (PNEC) or effect concentrations (EC50) for species of one trophic level combined with assessment factors. These first-tier assessments showed that the mixture risk in the studied cases rapidly exceeded the critical threshold, generally because of two or three main substances; in our case, essentially four pesticides. The proposed procedures therefore allow the identification of the most problematic substances, those contributing most to the toxicity of the mixture, for which management measures such as reducing their input into the aquatic environment should be envisioned. However, the risk associated with these mixtures was not negligible even without the main contributors: all substances, even at trace levels, add to the effect of the mixture. This has implications for environmental management, since it means that all pollutant sources must be reduced, not only the most problematic ones. Moreover, a lack of reliability was highlighted in these procedures, which can yield contradictory risk results owing to the inconsistent assessment factors applied across methods; their use beyond screening is therefore questionable without an adaptation of the safety factors employed.
The second part of the thesis investigated the reliability of more advanced procedures for predicting mixture effects on communities in aquatic systems. These methodologies combine the concentration addition (CA) or response addition (RA) models with species sensitivity distribution (SSD) curves. Mixture models were developed and validated species by species, not for several species aggregated into SSDs, so a more rigorous procedure would first apply the CA or RA model to each species separately and then, in a second step, combine the results to build an SSD for the mixture. Unfortunately, this is rarely feasible because it requires large ecotoxicological data sets that are usually unavailable. We therefore used randomly generated datasets to compare the rigorous species-by-species calculation with the traditional approach, in which the models are applied to the whole community without accounting for inter-species variation. The results showed that applying CA directly to SSDs can underestimate the mixture concentration affecting 5% or 50% of species, especially when substances show a large standard deviation in their species sensitivity distribution. Applying RA can lead to over- or underestimation, depending mainly on the slope of the dose-response curves of the individual species; the potential underestimation with RA becomes important when the EC50:EC10 ratio of the species' dose-response curves is smaller than 100. For most substances, however, typical ecotoxicity data are such that the mixture risk calculated by applying the models directly to SSDs remains consistent and, if anything, slightly overestimates the risk. These results provide a theoretical validation of the traditional approach, although this source of error should be kept in mind, particularly when SSDs show data distributions outside the limits determined in this study.
Finally, the last part of this thesis confronted mixture effect predictions with biological changes observed in the environment. Long-term monitoring data from a large European lake, Lake Geneva, offered the opportunity to assess to what extent the predicted toxicity of herbicide mixtures explains changes in the composition of the phytoplankton community, alongside classical limnological parameters such as nutrients and water temperature. The mixture toxicity of 14 herbicides regularly detected in the lake was calculated over several years using the CA and RA models with species sensitivity distribution curves. A decreasing temporal gradient of toxicity was observed from 2004 to 2009, with the proportion of affected species falling from 4% to less than 1%. Redundancy analysis and partial redundancy analysis showed that this gradient explains a significant portion of the variation in phytoplankton community composition, even after removing the effect of all other co-variables. Moreover, some of the species shown to be influenced, positively or negatively, by the decreasing toxicity in the lake over time had displayed similar behaviour in mesocosm studies. Herbicide mixture toxicity is thus one of the key parameters explaining phytoplankton changes in Lake Geneva. In conclusion, robust models exist to predict the risk of micropollutant mixtures to aquatic species, and they can be used to explain the role of these substances in ecosystem functioning. These models nevertheless have limits and underlying assumptions that must be carefully considered when applying them, and before using their results for environmental risk management.
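A minimal sketch of the first-tier concentration addition screening described above, which sums risk quotients C_i/PNEC_i over all substances; the substance concentrations and PNEC values below are illustrative placeholders, not measurements from Lake Geneva.

```python
# Sketch: first-tier mixture screening by concentration addition,
# RQ_mix = sum_i C_i / PNEC_i. All values below are illustrative.
measured_ng_l = {"diuron": 12.0, "atrazine": 8.5, "terbuthylazine": 3.1}
pnec_ng_l = {"diuron": 20.0, "atrazine": 60.0, "terbuthylazine": 34.0}

rq = {s: c / pnec_ng_l[s] for s, c in measured_ng_l.items()}
rq_mix = sum(rq.values())
print(f"mixture risk quotient = {rq_mix:.2f}")   # > 1 flags potential risk
print("largest contributors:", sorted(rq, key=rq.get, reverse=True))
```

Ranking the per-substance quotients is what identifies the few "main substances" driving the mixture risk, while the residual sum shows the contribution of all the trace-level compounds.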
Abstract:
The State of Santa Catarina, Brazil, has agricultural and livestock activities, such as pig farming, that are responsible for adding large amounts of phosphorus (P) to soils. However, a method is required to evaluate the environmental risk of these high soil P levels. One possible method for evaluating the environmental risk of P fertilization, whether organic or mineral, is to establish threshold levels of soil available P, measured by Mehlich-1 extraction, below which there is no high risk of P transfer from the soil to surface waters. However, the Mehlich-1 extractant is sensitive to soil clay content, and that factor should be considered when establishing such P-thresholds. The objective of this study was to determine P-thresholds using the Mehlich-1 extractant for soils with different clay contents in the State of Santa Catarina, Brazil. Soil from the B-horizon of an Oxisol with 800 g kg-1 clay was mixed with different amounts of sand to prepare artificial soils with 200, 400, 600, and 800 g kg-1 clay. The artificial soils were incubated for 30 days at a moisture content of 80% of field capacity to stabilize their physicochemical properties, followed by an additional 30-day incubation after liming to raise the pH(H2O) to 6.0. Soil P sorption curves were produced, and the maximum sorption (Pmax) was determined using the Langmuir model for each soil texture evaluated. Based on the Pmax values, seven rates of P were added to four replicates of each soil, which were incubated for a further 20 days. Following incubation, available P contents (P-Mehlich-1) and P dissolved in the soil solution (P-water) were determined. A change-point value (the P-Mehlich-1 value above which P-water starts increasing sharply) was calculated using segmented equations. The maximum level of P that a soil might safely adsorb (P-threshold) was defined as 80% of the change-point value to maintain a margin for environmental safety. The P-threshold value, in mg dm-3, depended on the soil clay content according to the model P-threshold = 40 + Clay, where the soil clay content is expressed as a percentage. The model was tested on 82 diverse soil samples from the State of Santa Catarina and was able to distinguish samples with high and low environmental risk.
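A sketch of the change-point (segmented) regression step described above, fitted here with SciPy on synthetic data; the P-threshold is then taken as 80% of the fitted change-point, as in the study.

```python
# Sketch: segmented (change-point) regression of P-water on P-Mehlich-1.
# The data points below are synthetic, for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def segmented(x, x0, a, b1, b2):
    """Two straight lines joined at change-point x0."""
    return np.where(x < x0, a + b1 * x, a + b1 * x0 + b2 * (x - x0))

p_mehlich = np.array([10, 30, 50, 70, 90, 110, 130, 150], dtype=float)
p_water = np.array([0.2, 0.3, 0.4, 0.5, 1.5, 3.0, 4.6, 6.1])

(x0, a, b1, b2), _ = curve_fit(segmented, p_mehlich, p_water,
                               p0=[80, 0.1, 0.005, 0.05])
print(f"change-point ~ {x0:.0f} mg/dm3, P-threshold ~ {0.8 * x0:.0f} mg/dm3")
# Across clay contents, the paper's fitted relation is:
# P-threshold (mg/dm3) = 40 + clay content (%).
```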
Abstract:
PURPOSE: To derive a prediction rule by using prospectively obtained clinical and bone ultrasonographic (US) data to identify elderly women at risk for osteoporotic fractures. MATERIALS AND METHODS: The study was approved by the Swiss Ethics Committee. A prediction rule was computed by using data from a 3-year prospective multicenter study to assess the predictive value of heel-bone quantitative US in 6174 Swiss women aged 70-85 years. A quantitative US device was used to calculate the stiffness index at the heel. Baseline characteristics, known risk factors for osteoporosis and falls, and the quantitative US stiffness index were used to develop a prediction rule for osteoporotic fracture. Predictive values were determined by using a univariate Cox model and were adjusted with multivariate analysis. RESULTS: There were five risk factors for the incidence of osteoporotic fracture: older age (>75 years) (P < .001), low heel quantitative US stiffness index (<78%) (P < .001), history of fracture (P = .001), recent fall (P = .001), and a failed chair test (P = .029). The score points assigned to these risk factors were as follows: age, 2 (3 if age > 80 years); low quantitative US stiffness index, 5 (7.5 if stiffness index < 60%); history of fracture, 1; recent fall, 1.5; and failed chair test, 1. The cutoff value to obtain a high sensitivity (90%) was 4.5. With this cutoff, 1464 women were at lower risk (score <4.5) and 4710 were at higher risk (score ≥4.5) for fracture. Among the higher-risk women, 6.1% had an osteoporotic fracture, versus 1.8% of women at lower risk. Among the women who had a hip fracture, 90% were in the higher-risk group. CONCLUSION: A prediction rule based on the quantitative US stiffness index and four clinical risk factors helped discriminate, with high sensitivity, women at higher risk for osteoporotic fracture from those at lower risk.
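The scoring rule is concrete enough to express directly in code. This sketch encodes the score points and the 4.5 cutoff exactly as summarized in the abstract; the function itself is our illustration, not the authors' software.

```python
# Sketch of the published fracture-risk score as summarized above.
def fracture_risk_score(age: float, stiffness_index: float,
                        prior_fracture: bool, recent_fall: bool,
                        failed_chair_test: bool) -> float:
    score = 0.0
    if age > 80:                 # age: 2 points (3 if > 80 years)
        score += 3
    elif age > 75:
        score += 2
    if stiffness_index < 60:     # QUS stiffness: 5 points (7.5 if < 60%)
        score += 7.5
    elif stiffness_index < 78:
        score += 5
    score += 1.0 if prior_fracture else 0.0
    score += 1.5 if recent_fall else 0.0
    score += 1.0 if failed_chair_test else 0.0
    return score

s = fracture_risk_score(78, 72, prior_fracture=True,
                        recent_fall=False, failed_chair_test=False)
print(s, "-> higher risk" if s >= 4.5 else "-> lower risk")  # 8.0 -> higher risk
```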
Abstract:
Simpson-Golabi-Behmel syndrome type 1 (SGBS1, OMIM #312870) is an X-linked overgrowth condition comprising abnormal facial appearance, supernumerary nipples, congenital heart defects, polydactyly, fingernail hypoplasia, and an increased risk of neonatal death and of neoplasia. It is caused by mutation or deletion of the GPC3 gene. We describe a macrosomic 27-week preterm newborn with SGBS1 who carries a novel GPC3 mutation, and we emphasize the phenotypic features that allow a correct diagnosis in the neonatal period, in particular the rib malformations, hypoplasia of the index finger and of its fingernail, and 2nd-3rd finger syndactyly.