950 results for Value-at-Risk (VaR)
Abstract:
AIMS: We studied the respective added value of quantitative myocardial blood flow (MBF) and myocardial flow reserve (MFR), as assessed with (82)Rb positron emission tomography (PET)/CT, in predicting major adverse cardiovascular events (MACEs) in patients with suspected myocardial ischaemia. METHODS AND RESULTS: Myocardial perfusion images were analysed semi-quantitatively (SDS, summed difference score) and quantitatively (MBF, MFR) in 351 patients. Follow-up was completed in 335 patients, and annualized MACE (cardiac death, myocardial infarction, revascularization, or hospitalization for congestive heart failure or de novo stable angina) rates were analysed with the Kaplan-Meier method in 318 patients after excluding 17 patients with early revascularizations (<60 days). Independent predictors of MACEs were identified by multivariate analysis. During a median follow-up of 624 days (inter-quartile range 540-697), 35 MACEs occurred. The annualized MACE rate was higher in patients with ischaemia (SDS >2) (n = 105) than in those without [14% (95% CI = 9.1-22%) vs. 4.5% (2.7-7.4%), P < 0.0001]. The lowest MFR tertile group (MFR <1.8) had the highest MACE rate [16% (11-25%) vs. 2.9% (1.2-7.0%) and 4.3% (2.1-9.0%), P < 0.0001]. Similarly, the lowest stress MBF tertile group (MBF <1.8 mL/min/g) had the highest MACE rate [14% (9.2-22%) vs. 7.3% (4.2-13%) and 1.8% (0.6-5.5%), P = 0.0005]. Quantitation with stress MBF or MFR had significant independent prognostic power in addition to the semi-quantitative findings, and the largest added value was obtained by combining stress MBF with SDS. This held true even for patients without ischaemia. CONCLUSION: Perfusion findings on (82)Rb PET/CT are strong predictors of MACE outcome. MBF quantification adds value by allowing further risk stratification in patients with both normal and abnormal perfusion images.
Abstract:
OBJECTIVE: Overanticoagulated medical inpatients may be particularly prone to bleeding complications. Among medical inpatients with excessive oral anticoagulation (AC), we sought to identify patient and treatment factors associated with bleeding. METHODS: We prospectively identified consecutive patients receiving oral AC admitted to the medical ward of a university hospital (February-July 2006) who had at least one international normalized ratio (INR) value >3.0 during the hospital stay. We recorded patient characteristics, AC-related factors, and concomitant treatments (e.g., platelet inhibitors) that increase the bleeding risk. The outcome was overall bleeding, defined as the occurrence of major or minor bleeding during the hospital stay. We used logistic regression to explore patient and treatment factors associated with bleeding. RESULTS: Overall, 145 inpatients with excessive oral AC comprised our study sample. Atrial fibrillation (59%) and venous thromboembolism (28%) were the most common indications for AC. Twelve patients (8.3%) experienced a bleeding event; of these, 8 had major bleeding. Women had a somewhat higher risk of major bleeding than men (12.5% vs 4.1%, p = 0.08). Multivariable analysis demonstrated that female gender was independently associated with bleeding (odds ratio [OR] 4.3, 95% confidence interval [95% CI] 1.1-17.8). Age, history of major bleeding, value of the index INR, and concomitant treatment with platelet inhibitors were not independent predictors of bleeding. CONCLUSIONS: We found that hospitalized women experiencing an episode of excessive oral AC have a 4-fold increased risk of bleeding compared with men. Whether overanticoagulated women require more aggressive measures of AC reversal must be examined in further studies.
Abstract:
Background: In haemodynamically stable patients with acute symptomatic pulmonary embolism (PE), studies have not evaluated the usefulness of combining the measurement of cardiac troponin, transthoracic echocardiogram (TTE), and lower extremity complete compression ultrasound (CCUS) testing for predicting the risk of PE-related death. Methods: The study assessed the ability of three diagnostic tests (cardiac troponin I (cTnI), echocardiogram, and CCUS) to prognosticate the primary outcome of PE-related mortality during 30 days of follow-up after a diagnosis of PE by objective testing. Results: Of 591 normotensive patients diagnosed with PE, the primary outcome occurred in 37 patients (6.3%; 95% CI 4.3% to 8.2%). Patients with right ventricular dysfunction (RVD) by TTE and concomitant deep vein thrombosis (DVT) by CCUS had a PE-related mortality of 19.6%, compared with 17.1% of patients with elevated cTnI and concomitant DVT and 15.2% of patients with elevated cTnI and RVD. The use of any two-test strategy had a higher specificity and positive predictive value compared with the use of any test by itself. A combined three-test strategy did not further improve prognostication. For a subgroup analysis of high-risk patients, according to the pulmonary embolism severity index (classes IV and V), positive predictive values of the two-test strategies for PE-related mortality were 25.0%, 24.4% and 20.7%, respectively. Conclusions: In haemodynamically stable patients with acute symptomatic PE, a combination of echocardiography (or troponin testing) and CCUS improved prognostication compared with the use of any test by itself for the identification of those at high risk of PE-related death.
Abstract:
The value of driving
We as Americans - and especially as Iowans - value the independence of getting around in our own vehicles and staying connected with our families and communities. The majority of older Iowans enjoy a more active, healthy and longer life than previous generations. Freedom of mobility shapes our quality of life. With aging, driving becomes an increasing concern for older Iowans and their families. How we deal with changes in our driving ability and, eventually, choose when and how to retire from driving will affect our safety and our quality of life.
Abstract:
The choice to adopt risk-sensitive measurement approaches for operational risks: the case of the Advanced Measurement Approach under the Basel II New Capital Accord. This paper investigates the choice of the operational risk approach under Basel II requirements and whether the adoption of advanced risk measurement approaches allows banks to save capital. Among the three possible approaches for operational risk measurement, the Advanced Measurement Approach (AMA) is the most sophisticated and requires the use of historical loss data, the application of statistical tools, and the engagement of highly qualified staff. Our results provide evidence that the adoption of AMA is contingent on the availability of bank resources and prior experience in risk-sensitive operational risk measurement practices. Moreover, banks that choose AMA exhibit lower capital requirements and, as a result, might gain a competitive advantage compared with banks that opt for less sophisticated approaches. - Internal Risk Controls and their Impact on Bank Solvency. Recent cases in the financial sector have shown the importance of risk management controls for risk taking and firm performance. Despite advances in the design and implementation of risk management mechanisms, there is little research on their impact on the behavior and performance of firms. Based on data from a sample of 88 banks covering the period between 2004 and 2010, we provide evidence that internal risk controls affect the solvency of banks. In addition, our results show that the level of internal risk controls leads to a higher degree of solvency in banks with a major shareholder, in contrast to widely held banks. However, the relationship between internal risk controls and bank solvency is negatively affected by BHC growth strategies and external restrictions on bank activities, while higher regulatory requirements for bank capital positively moderate this relationship. - The Impact of the Sophistication of Risk Measurement Approaches under Basel II on Bank Holding Companies' Value. Previous research has shown the importance of external regulation for banks' behavior. Some inefficient standards may accentuate risk taking in banks and provoke a financial crisis. Despite the growing literature on the potential effects of the Basel II rules, there is little empirical research on the efficiency of risk-sensitive capital measurement approaches and their impact on bank profitability and market valuation. Based on data from a sample of 66 banks covering the period between 2008 and 2010, we provide evidence that prudential ratios computed under Basel II standards predict the value of banks. However, this relation is contingent on the degree of sophistication of the risk measurement approaches that banks apply. Capital ratios are effective in predicting bank market valuation when banks adopt the advanced approaches to compute the value of their risk-weighted assets.
Abstract:
A high heart rate (HR) predicts future cardiovascular events. We explored the predictive value of HR in patients with high-risk hypertension and examined whether blood pressure reduction modifies this association. The participants were 15,193 patients with hypertension enrolled in the Valsartan Antihypertensive Long-term Use Evaluation (VALUE) trial and followed up for 5 years. HR was assessed from electrocardiographic recordings obtained annually throughout the study period. The primary end point was the interval to cardiac events. After adjustment for confounders, the hazard ratio for the composite cardiac primary end point per 10-beats/min increment in baseline HR was 1.16 (95% confidence interval 1.12 to 1.20). Compared with the lowest HR quintile, the adjusted hazard ratio in the highest quintile was 1.73 (95% confidence interval 1.46 to 2.04). Compared with the pooled lower quintiles of baseline HR, the annual incidence of the primary end point in the top baseline quintile was greater in each of the 5 study years (all p <0.05). The adjusted hazard ratio for the primary end point in the highest in-trial HR quintile versus the lowest quintile was 1.53 (95% confidence interval 1.26 to 1.85). The incidence of primary end points in the highest in-trial HR group compared with the pooled 4 lower quintiles was 53% greater in patients with well-controlled blood pressure (p <0.001) and 34% greater in those with uncontrolled blood pressure (p = 0.002). In conclusion, an increased HR is a long-term predictor of cardiovascular events in patients with high-risk hypertension. This effect was not modified by good blood pressure control. It is not yet known whether a therapeutic reduction of HR would improve cardiovascular prognosis.
Abstract:
In recent years, numerous studies have demonstrated the toxic effects of organic micropollutants on the species living in our lakes and rivers. However, most of these studies have focused on the toxicity of individual substances, whereas organisms are exposed every day to thousands of substances in mixture, and the effects of these cocktails are far from negligible. This doctoral thesis therefore examined models for predicting the environmental risk of such cocktails to the aquatic environment. The main objective was to assess the ecological risk of the mixtures of chemical substances measured in Lake Geneva, and also to take a critical look at the methodologies used, in order to propose adaptations for a better estimation of the risk. In the first part of this work, the risk of mixtures of pesticides and pharmaceuticals for the River Rhône and for Lake Geneva was established using approaches envisioned, in particular, in European legislation. These are screening approaches, that is, approaches allowing a general assessment of mixture risk. Such an approach makes it possible to identify the most problematic substances, namely those contributing most to the toxicity of the mixture; in our case, these were essentially four pesticides. The study also shows that all substances, even in trace amounts, contribute to the effect of the mixture. This finding has implications for environmental management: it implies that all sources of pollutants must be reduced, not only the most problematic ones. However, the proposed approach also has an important conceptual bias, which makes its use questionable beyond screening and would require an adaptation of the assessment factors employed. The second part of the study addressed the use of mixture models in environmental risk calculations. Mixture models were developed and validated species by species, not for assessment of an entire ecosystem; their use should therefore involve a species-by-species calculation, which is rarely done owing to the lack of available ecotoxicological data. The goal was therefore to compare, using randomly generated values, the risk calculated with a rigorous, species-by-species method against the classical calculation in which the models are applied to the whole community without accounting for inter-species variation. The results were similar in most cases, which validates the traditionally used approach; however, this work also identified cases in which the classical application can lead to an under- or overestimation of the risk. Finally, the last part of the thesis examined the influence that micropollutant cocktails may have had on communities in situ. A two-step approach was adopted. First, the toxicity of fourteen herbicides detected in Lake Geneva was determined; over the period studied, from 2004 to 2009, this herbicide toxicity decreased from 4% of species affected to less than 1%. The question was then whether this decrease in toxicity had an impact on the development of certain species within the algal community.
To answer this, statistical analysis was used to isolate other factors that may influence the flora, such as water temperature or the presence of phosphates, and thus to identify which species turned out to have been influenced, positively or negatively, by the decrease of toxicity in the lake over time. Interestingly, some of these species had already shown similar behaviour in mesocosm studies. In conclusion, this work shows that robust models exist for predicting the risk of micropollutant mixtures to aquatic species, and that they can be used to explain the role of these substances in the functioning of ecosystems. These models nevertheless have limits and underlying assumptions that must be kept in mind when they are applied.
- For several years now, scientists as well as society at large have been concerned about the risks that organic micropollutants may pose to the aquatic environment. Numerous studies have shown the toxic effects these substances can induce in organisms living in our lakes and rivers, especially when they are exposed to acute or chronic concentrations. However, most studies have focused on the toxicity of single compounds, i.e. substances considered individually, and the same holds for the environmental risk assessment procedures in current European regulations. Yet aquatic organisms are typically exposed simultaneously, every day, to thousands of organic compounds, and the toxic effects resulting from these "cocktails" cannot be neglected. The ecological risk assessment of mixtures of such compounds therefore has to be addressed by scientists in the most reliable and appropriate way. In the first part of this thesis, the procedures currently envisioned for aquatic mixture risk assessment in European legislation are described. These methodologies are based on the mixture model of concentration addition and the use of predicted no-effect concentrations (PNEC) or effect concentrations (EC50) with assessment factors. These principal approaches were applied to two specific case studies, Lake Geneva and the River Rhône in Switzerland, including a discussion of the outcomes of such applications. These first-level assessments showed that the mixture risks for the studied cases rapidly exceeded the critical value. This exceedance is generally due to two or three main substances. The proposed procedures therefore allow the identification of the most problematic substances, for which management measures, such as a reduction of their input into the aquatic environment, should be envisioned. However, it was also shown that the risk levels associated with mixtures of compounds are not negligible even without considering these main substances: it is the sum of the substances that is problematic, which is more challenging in terms of risk management. Moreover, a lack of reliability in the procedures was highlighted, which can lead to contradictory results in terms of risk; this is linked to the inconsistency of the assessment factors applied in the different methods. In the second part of the thesis, the reliability of more advanced procedures for predicting the mixture effect on communities in the aquatic system was investigated.
These established methodologies combine the model of concentration addition (CA) or response addition (RA) with species sensitivity distribution (SSD) curves. However, the mixture effect predictions were shown to be consistent only when the mixture models are applied to a single species, and not to several species aggregated simultaneously into SSDs. Hence, a more stringent procedure for mixture risk assessment is proposed: first apply the CA or RA models to each species separately and, in a second step, combine the results to build an SSD for the mixture. Unfortunately, this methodology is not applicable in most cases, because it requires large data sets that are usually not available. Therefore, the differences between the two methodologies were studied with artificially created datasets in order to characterize the robustness of the traditional approach of applying the models to species sensitivity distributions. The results showed that applying CA directly to SSDs can lead to underestimation of the mixture concentration affecting 5% or 50% of species, especially when substances show a large standard deviation in their species sensitivity distribution. The application of RA can lead to over- or underestimates, depending mainly on the slope of the dose-response curves of the individual species. The potential underestimation with RA becomes important when the ratio between the EC50 and the EC10 of the dose-response curves of the species composing the SSD is smaller than 100. However, for common real cases of ecotoxicity data, the mixture risk calculated by applying the mixture models directly to SSDs remains consistent and would rather slightly overestimate the risk. These results can be used as a theoretical validation of the currently applied methodology. Nevertheless, when assessing the risk of mixtures, one has to keep in mind this source of error in the classical methodology, especially when SSDs present a distribution of the data outside the range determined in this study. Finally, in the last part of this thesis, we confronted the mixture effect predictions with biological changes observed in the environment. In this study, long-term monitoring of a large European lake, Lake Geneva, provided the opportunity to assess to what extent the predicted toxicity of herbicide mixtures explains changes in the composition of the phytoplankton community, alongside other classical limnological parameters such as nutrients. To reach this goal, the gradient of the mixture toxicity of 14 herbicides regularly detected in the lake was calculated, using the concentration addition and response addition models. A decreasing temporal gradient of toxicity was observed from 2004 to 2009. Redundancy analysis and partial redundancy analysis showed that this gradient explains a significant portion of the variation in phytoplankton community composition, even after removing the effect of all other co-variables. Moreover, some species that were found to be influenced, positively or negatively, by the decrease of toxicity in the lake over time showed similar behaviour in mesocosm studies. It could be concluded that herbicide mixture toxicity is one of the key parameters explaining phytoplankton changes in Lake Geneva. To conclude, different methods exist to predict the risk of mixtures in ecosystems, but their reliability varies depending on the underlying hypotheses.
One should therefore carefully consider these hypotheses, as well as the limits of the approaches, before using the results for environmental risk management.
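The two mixture models named in this abstract, concentration addition (CA) and response addition (RA), can be illustrated with a minimal sketch. The Python code below applies them species by species, as the thesis recommends; the log-logistic dose-response form, the parameter values, and the function names are illustrative assumptions, and the thesis itself applies these models to full species sensitivity distributions rather than to a single species.

```python
import numpy as np

def effect_loglogistic(conc, ec50, slope):
    """Effect fraction for one substance on one species (log-logistic model, assumed form)."""
    return 1.0 / (1.0 + (ec50 / conc) ** slope)

def mixture_effect_ra(concs, ec50s, slopes):
    """Response addition / independent action: E_mix = 1 - prod(1 - E_i)."""
    effects = [effect_loglogistic(c, e, s) for c, e, s in zip(concs, ec50s, slopes)]
    return 1.0 - np.prod([1.0 - e for e in effects])

def toxic_units_ca(concs, ec50s):
    """Concentration addition expressed as the sum of toxic units (c_i / EC50_i);
    a sum of 1 corresponds to the mixture EC50 under CA."""
    return sum(c / e for c, e in zip(concs, ec50s))

# Hypothetical three-substance mixture for a single species
concs  = [0.2, 0.5, 0.1]   # exposure concentrations (e.g. ug/L)
ec50s  = [1.0, 4.0, 0.8]   # per-substance EC50 for that species
slopes = [2.0, 1.5, 3.0]   # log-logistic slopes

print(toxic_units_ca(concs, ec50s))             # ~0.45 toxic units under CA
print(mixture_effect_ra(concs, ec50s, slopes))  # combined effect fraction under RA
```

Repeating such a per-species calculation over many species, and only then aggregating the results into an SSD, is the more rigorous procedure the abstract contrasts with the classical application of CA or RA directly to SSD curves.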
Abstract:
The State of Santa Catarina, Brazil, has agricultural and livestock activities, such as pig farming, that are responsible for adding large amounts of phosphorus (P) to soils. However, a method is required to evaluate the environmental risk of these high soil P levels. One possible method for evaluating the environmental risk of P fertilization, whether organic or mineral, is to establish threshold levels of soil available P, measured by Mehlich-1 extractions, below which there is not a high risk of P transfer from the soil to surface waters. However, the Mehlich-1 extractant is sensitive to soil clay content, and that factor should be considered when establishing such P-thresholds. The objective of this study was to determine P-thresholds using the Mehlich-1 extractant for soils with different clay contents in the State of Santa Catarina, Brazil. Soil from the B-horizon of an Oxisol with 800 g kg-1 clay was mixed with different amounts of sand to prepare artificial soils with 200, 400, 600, and 800 g kg-1 clay. The artificial soils were incubated for 30 days with moisture content at 80 % of field capacity to stabilize their physicochemical properties, followed by additional incubation for 30 days after liming to raise the pH(H2O) to 6.0. Soil P sorption curves were produced, and the maximum sorption (Pmax) was determined using the Langmuir model for each soil texture evaluated. Based on the Pmax values, seven rates of P were added to four replicates of each soil, and incubated for 20 days more. Following incubation, available P contents (P-Mehlich-1) and P dissolved in the soil solution (P-water) were determined. A change-point value (the P-Mehlich-1 value above which P-water starts increasing sharply) was calculated through the use of segmented equations. The maximum level of P that a soil might safely adsorb (P-threshold) was defined as 80 % of the change-point value to maintain a margin for environmental safety. The P-threshold value, in mg dm-3, was dependent on the soil clay content according to the model P-threshold = 40 + Clay, where the soil clay content is expressed as a percentage. The model was tested in 82 diverse soil samples from the State of Santa Catarina and was able to distinguish samples with high and low environmental risk.
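The clay-dependent threshold model reported in this abstract is simple enough to state in code. The sketch below is a minimal Python rendering of the published relation P-threshold = 40 + clay (%); the function names, the strict-inequality convention for flagging a sample, and the example values are illustrative assumptions, not part of the original study.

```python
def p_threshold_mg_dm3(clay_percent):
    """Environmental P-threshold (mg dm-3) from the reported model:
    P-threshold = 40 + clay content (%). The threshold is defined as 80% of the
    fitted change-point in the P-water vs P-Mehlich-1 relationship."""
    return 40.0 + clay_percent

def high_environmental_risk(p_mehlich1_mg_dm3, clay_percent):
    """Flag a sample as high risk when its Mehlich-1 P exceeds the clay-dependent
    threshold (assumed strict inequality)."""
    return p_mehlich1_mg_dm3 > p_threshold_mg_dm3(clay_percent)

# Example: a soil with 40% clay has a threshold of 80 mg dm-3,
# so a Mehlich-1 P of 95 mg dm-3 would be classified as high environmental risk.
print(high_environmental_risk(95, 40))  # True
```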
Abstract:
PURPOSE: To derive a prediction rule by using prospectively obtained clinical and bone ultrasonographic (US) data to identify elderly women at risk for osteoporotic fractures. MATERIALS AND METHODS: The study was approved by the Swiss Ethics Committee. A prediction rule was computed by using data from a 3-year prospective multicenter study to assess the predictive value of heel-bone quantitative US in 6174 Swiss women aged 70-85 years. A quantitative US device to calculate the stiffness index at the heel was used. Baseline characteristics, known risk factors for osteoporosis and fall, and the quantitative US stiffness index were used to elaborate a predictive rule for osteoporotic fracture. Predictive values were determined by using a univariate Cox model and were adjusted with multivariate analysis. RESULTS: There were five risk factors for the incidence of osteoporotic fracture: older age (>75 years) (P < .001), low heel quantitative US stiffness index (<78%) (P < .001), history of fracture (P = .001), recent fall (P = .001), and a failed chair test (P = .029). The score points assigned to these risk factors were as follows: age, 2 (3 if age > 80 years); low quantitative US stiffness index, 5 (7.5 if stiffness index < 60%); history of fracture, 1; recent fall, 1.5; and failed chair test, 1. The cutoff value to obtain a high sensitivity (90%) was 4.5. With this cutoff, 1464 women were at lower risk (score <4.5) and 4710 were at higher risk (score ≥4.5) for fracture. Among the higher-risk women, 6.1% had an osteoporotic fracture, versus 1.8% of women at lower risk. Among the women who had a hip fracture, 90% were in the higher-risk group. CONCLUSION: A prediction rule obtained by using quantitative US stiffness index and four clinical risk factors helped discriminate, with high sensitivity, women at higher versus those at lower risk for osteoporotic fracture.
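The scoring rule described in this abstract can be expressed directly in code. The sketch below is a minimal Python rendering of the reported point values and the 4.5 cutoff; the handling of ages at or below 75 years and stiffness indices at or above 78% (assumed to contribute no points), the exact boundary conventions, and the function name are assumptions not stated in the abstract.

```python
def fracture_risk_score(age, stiffness_index, prior_fracture, recent_fall, failed_chair_test):
    """Score a woman aged 70-85 for osteoporotic fracture risk, following the
    point values reported in the abstract (illustrative sketch)."""
    score = 0.0
    # Age: 2 points if > 75 years, 3 points if > 80 years (assumed 0 otherwise)
    if age > 80:
        score += 3
    elif age > 75:
        score += 2
    # Heel quantitative US stiffness index: 5 points if < 78%, 7.5 if < 60%
    if stiffness_index < 60:
        score += 7.5
    elif stiffness_index < 78:
        score += 5
    # Clinical risk factors
    score += 1.0 if prior_fracture else 0.0
    score += 1.5 if recent_fall else 0.0
    score += 1.0 if failed_chair_test else 0.0
    return score

# A score >= 4.5 places the patient in the higher-risk group (90% sensitivity for hip fracture).
print(fracture_risk_score(82, 74, prior_fracture=True, recent_fall=False,
                          failed_chair_test=False))  # 3 + 5 + 1 = 9.0
```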
Abstract:
The Simpson-Golabi-Behmel syndrome type 1 (SGBS1, OMIM #312870) is an X-linked overgrowth condition comprising abnormal facial appearance, supernumerary nipples, congenital heart defects, polydactyly, fingernail hypoplasia, and an increased risk of neonatal death and of neoplasia. It is caused by mutation or deletion of the GPC3 gene. We describe a macrosomic 27-week preterm newborn with SGBS1 who carries a novel GPC3 mutation, and we emphasize the phenotypic aspects that allow a correct diagnosis neonatally, in particular the rib malformations, hypoplasia of the index finger and of its fingernail, and 2nd-3rd finger syndactyly.
Abstract:
We assessed whether fasting status modifies the prognostic value of triglyceride measurements for the risk of myocardial infarction (MI). Analyses used mixed-effect models and Poisson regression. After confounders were controlled for, fasting triglyceride levels were, on average, 0.122 mmol/L lower than nonfasting levels. Each 2-fold increase in the latest triglyceride level was associated with a 38% increase in MI risk (relative rate, 1.38; 95% confidence interval, 1.26-1.51); fasting status did not modify this association. Our results suggest that it may not be necessary to restrict analyses to fasting measurements when considering MI risk.
Abstract:
Metabolic syndrome represents a clustering of risk factors closely linked to cardiovascular disease and diabetes. At first sight, nuclear medicine has no direct application in cardiology at the level of primary prevention, but positron emission tomography is a non-invasive imaging technique that can assess myocardial perfusion as well as endothelium-dependent coronary vasomotion, a surrogate marker of cardiovascular event rate, and thus finds an application in the study of coronary pathophysiology. As the prevalence of the metabolic syndrome is still unknown in Switzerland, we will estimate it from data available within the framework of a health promotion program. Based on the deleterious effect on the endothelium already observed with two of its components, we will estimate the number of persons at risk in Switzerland.
Abstract:
The pursuit of high response rates to minimise the threat of nonresponse bias continues to dominate decisions about resource allocation in survey research, yet a growing body of research has begun to question this practice. In this study, we use previously unavailable data from a new sampling frame based on population registers to assess the value of different methods designed to increase response rates on the European Social Survey in Switzerland. Using sampling-frame data provides information about both respondents and nonrespondents, making it possible to examine how changes in response rates resulting from the use of different fieldwork methods relate to changes in the composition and representativeness of the responding sample. We compute an R-indicator to assess representativity with respect to the register variables in the sampling frame, and find little improvement in sample composition as response rates increase. We then examine the impact of response rate increases on the risk of nonresponse bias based on the Maximal Absolute Bias (MAB) and on coefficients of variation between subgroup response rates, alongside the associated costs of different types of fieldwork effort. The results show that increases in response rate help to reduce the MAB, while only small but important improvements to sample representativity are gained by varying the type of effort. These findings lend further support to research that has called into question the value of extensive investment in procedures aimed at reaching response rate targets, and they underline the need for more tailored fieldwork strategies aimed both at reducing survey costs and at minimising the risk of bias.
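The two quantities used in this abstract, the R-indicator and the Maximal Absolute Bias, are simple functions of estimated response propensities. The sketch below shows one common formulation (R = 1 - 2·SD of propensities, MAB = (1 - R) / (2·mean propensity)); the exact variance estimator, the way the propensities are modelled, and the example values are assumptions, and the paper's own computations on the register data may differ in detail.

```python
import numpy as np

def r_indicator(propensities):
    """Representativity indicator: R = 1 - 2 * SD(response propensities).
    Propensities are typically estimated from frame/register variables,
    e.g. with a logistic regression of response on those variables."""
    return 1.0 - 2.0 * np.std(propensities, ddof=1)

def maximal_absolute_bias(propensities):
    """Maximal absolute bias relative to the mean response propensity:
    MAB = (1 - R) / (2 * mean propensity)."""
    r = r_indicator(propensities)
    return (1.0 - r) / (2.0 * np.mean(propensities))

# Hypothetical estimated response propensities for a handful of sampled units
rho = np.array([0.45, 0.52, 0.61, 0.38, 0.55, 0.49, 0.64, 0.41])
print(r_indicator(rho), maximal_absolute_bias(rho))
```

As the spread of the propensities shrinks (a more balanced response), R rises towards 1 and the MAB falls, which is the sense in which the abstract reports that higher response rates reduced the MAB while barely changing representativity.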
Abstract:
In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine a critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that are often misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the observed difference, or one more extreme, given that the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
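The distinction drawn in this abstract can be made concrete with a small example. The sketch below, on simulated data and using a two-sample t-test purely as a convenient test statistic, computes a Fisher-style p value and then applies a Neyman-Pearson-style fixed-alpha decision; the data, seed, and alpha level are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Two simulated samples; under H0 the group means are equal
control   = rng.normal(loc=10.0, scale=2.0, size=30)
treatment = rng.normal(loc=11.0, scale=2.0, size=30)

# Fisher: the p value measures the strength of evidence against H0
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Neyman-Pearson: fix alpha (Type I error rate) in advance and make a binary decision
alpha = 0.05
print("reject H0" if p_value < alpha else "fail to reject H0")

# Note: p is NOT the probability that H0 is true; it is the probability of a
# result at least this extreme, computed under the assumption that H0 is true.
```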
Abstract:
BACKGROUND: The Mediterranean diet has a beneficial role against various neoplasms, but data are scanty on oral cavity and pharyngeal (OCP) cancer. METHODS: We analysed data from a case-control study carried out between 1997 and 2009 in Italy and Switzerland, including 768 incident, histologically confirmed OCP cancer cases and 2078 hospital controls. Adherence to the Mediterranean diet was measured using the Mediterranean Diet Score (MDS), based on the major characteristics of the Mediterranean diet, and two other scores, the Mediterranean Dietary Pattern Adherence Index (MDP) and the Mediterranean Adequacy Index (MAI). RESULTS: We estimated the odds ratios (ORs), and the corresponding 95% confidence intervals (CIs), for increasing levels of the scores (i.e., increasing adherence) using multiple logistic regression models. We found a reduced risk of OCP cancer for increasing levels of the MDS, the OR for subjects with six or more MDS components compared with two or fewer being 0.20 (95% CI 0.14-0.28, P-value for trend <0.0001). The ORs for the highest vs the lowest quintile were 0.20 (95% CI 0.14-0.28) for the MDP score (score 66.2 or more vs less than 57.9) and 0.48 (95% CI 0.33-0.69) for the MAI score (score 2.1 or more vs less than 0.92), with significant trends of decreasing risk for both scores. The favourable effect of the Mediterranean diet was apparently stronger in younger subjects, in those with a higher level of education, and in ex-smokers, although it was observed in other strata as well. CONCLUSIONS: Our study provides strong evidence of a beneficial role of the Mediterranean diet against OCP cancer.