955 results for Multivariate risk model


Relevance:

30.00%

Publisher:

Abstract:

Background: Population-based cohort studies of risk factors for stroke are scarce in developing countries and none has been done in the African region. We conducted a longitudinal study in the Seychelles (Indian Ocean, east of Kenya), a middle-income island state where the majority of the population is of African descent. Such data in Africa are important for international comparison and for advocacy in the region. Methods: Three examination surveys of cardiovascular risk factors were performed in independent samples representative of the general population aged 25-64 in 1989, 1994 and 2004 (n=1081, 1067, and 1255, respectively). Baseline risk factor data were linked with cause-specific mortality from vital statistics up to May 2007 (all deaths are medically certified in the Seychelles and kept in an electronic database). We considered stroke (any type) as a cause of death if the diagnosis was reported in any of the 4 death-certificate fields for underlying and concomitant causes of death. Results: Among the 2479 persons aged 35-64 at baseline, 280 died during follow-up (maximum: 18.2 years; mean: 10.2 years), including 56 with stroke. In this age range, age-adjusted mortality rates (per 100,000 per year) were 969 for all causes and 187 for stroke; the age-adjusted prevalence of high blood pressure (≥140/90 mmHg) was 48%. In multivariate Cox survival regression, stroke mortality increased by 18% and 35% per 10-mmHg increase in systolic and diastolic BP, respectively (p<0.001). Stroke mortality was also associated with age, smoking ≥5 cigarettes vs. no smoking (HR: 2.4; 95% CI: 1.2-4.8) and diabetes (HR: 1.9; 1.02-3.6), but not with sex, LDL-cholesterol, alcohol intake or professional occupation. Conclusion: This first population-based cohort study in the African region demonstrates high mortality rates from stroke in middle-aged adults and confirms associations with high BP and other risk factors. This emphasizes the importance of reducing BP and other modifiable risk factors, in high-risk individuals and in the general population, as a main strategy to reduce the burden of stroke.
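The hazard ratios per 10-mmHg increment reported in this kind of Cox analysis follow directly from the model's log-linear form: a coefficient β on blood pressure implies HR = exp(β·Δ) for a change of Δ units. A minimal sketch of this arithmetic, with the per-mmHg coefficients back-calculated from the reported 18% and 35% increases purely for illustration:

```python
import math

def hazard_ratio(beta: float, delta: float) -> float:
    """Hazard ratio implied by a Cox coefficient beta for a covariate change of delta units."""
    return math.exp(beta * delta)

# Illustrative per-mmHg coefficients back-calculated from the reported hazard ratios:
beta_sbp = math.log(1.18) / 10  # systolic BP: +18% per 10 mmHg
beta_dbp = math.log(1.35) / 10  # diastolic BP: +35% per 10 mmHg

print(round(hazard_ratio(beta_sbp, 10), 2))  # 1.18
print(round(hazard_ratio(beta_dbp, 20), 2))  # risk compounds multiplicatively over 20 mmHg
```

Note that the effect compounds multiplicatively, so a 20-mmHg diastolic increase corresponds to 1.35², not 1.70.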


In this article, starting from the inverse of the variance-covariance matrix, the Markowitz mean-variance model is obtained by a shorter and mathematically rigorous route. The equilibrium equation of Sharpe's CAPM is also derived.
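The role of the inverse covariance matrix in the Markowitz derivation can be illustrated with the closed-form global minimum-variance portfolio, w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A minimal numpy sketch; the covariance matrix below is made up for illustration:

```python
import numpy as np

def min_variance_weights(cov: np.ndarray) -> np.ndarray:
    """Global minimum-variance portfolio weights: w = inv(Sigma) 1 / (1' inv(Sigma) 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)  # computes inv(Sigma) @ 1 without forming the inverse
    return w / w.sum()

# Illustrative 3-asset covariance matrix (symmetric positive definite).
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = min_variance_weights(cov)
print(w, w.sum())  # weights sum to 1
```

By construction, the resulting portfolio variance wᵀΣw is no larger than that of any single asset.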


Background: An elevated resting heart rate is associated with rehospitalization for heart failure and is a modifiable risk factor in heart failure patients. We aimed to examine the association between resting heart rate and incident heart failure in a population-based cohort study of healthy adults without pre-existing overt heart disease. Methods and Results: We studied 4768 men and women aged ≥55 years from the population-based Rotterdam Study. We excluded participants with prevalent heart failure, coronary heart disease, pacemakers, atrial fibrillation, or atrioventricular block, and those using β-blockers or calcium channel blockers. We used extended Cox models allowing resting heart rate to vary over follow-up as a time-dependent covariate. During a median of 14.6 years of follow-up, 656 participants developed heart failure. The risk of heart failure was higher in men with a higher resting heart rate. For each increment of 10 beats per minute, the multivariable-adjusted hazard ratios in men were 1.16 (95% confidence interval, 1.05-1.28; P=0.005) in the time-fixed heart rate model and 1.13 (95% confidence interval, 1.02-1.25; P=0.017) in the time-dependent heart rate model. The association could not be demonstrated in women (P for interaction=0.004). Censoring participants at incident coronary heart disease, or using time-dependent models to account for the use of β-blockers or calcium channel blockers during follow-up, did not alter the results. Conclusions: A higher resting heart rate, at baseline or persistently during follow-up, is an independent risk factor for the development of heart failure in healthy older men in the general population.


We investigated the association between diet and head and neck cancer (HNC) risk using data from the International Head and Neck Cancer Epidemiology (INHANCE) consortium. The INHANCE pooled data included 22 case-control studies with 14,520 cases and 22,737 controls. Center-specific quartiles among the controls were used for food groups, and frequencies per week were used for single food items. A dietary pattern score combining high fruit and vegetable intake and low red meat intake was created. Odds ratios (OR) and 95% confidence intervals (CI) for the dietary items on the risk of HNC were estimated with a two-stage random-effects logistic regression model. An inverse association was observed for higher-frequency intake of fruit (4th vs. 1st quartile OR = 0.52, 95% CI = 0.43-0.62, p(trend) < 0.01) and vegetables (OR = 0.66, 95% CI = 0.49-0.90, p(trend) = 0.01). Intake of red meat (OR = 1.40, 95% CI = 1.13-1.74, p(trend) < 0.01) was positively associated with HNC risk. Higher dietary pattern scores, reflecting high fruit/vegetable and low red meat intake, were associated with reduced HNC risk (per score increment OR = 0.90, 95% CI = 0.84-0.97).
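In a two-stage pooled analysis like the one described, study-specific log-ORs are estimated first and then pooled with a random-effects model. A minimal sketch of the pooling stage using DerSimonian-Laird weights; the study-level estimates below are hypothetical, and the actual INHANCE analysis uses a random-effects logistic model rather than this simplified meta-analytic pooling:

```python
import math

def dersimonian_laird(log_ors, variances):
    """Pool study-level log-ORs with a DerSimonian-Laird random-effects model."""
    w = [1 / v for v in variances]
    fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_ors))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(log_ors) - 1)) / c)  # between-study variance
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return math.exp(pooled), (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))

# Hypothetical study-specific ORs for high fruit intake (4th vs. 1st quartile):
log_ors = [math.log(0.50), math.log(0.60), math.log(0.45)]
variances = [0.02, 0.03, 0.05]
or_pooled, ci = dersimonian_laird(log_ors, variances)
print(round(or_pooled, 2), tuple(round(x, 2) for x in ci))
```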


Background: In haemodynamically stable patients with acute symptomatic pulmonary embolism (PE), studies have not evaluated the usefulness of combining the measurement of cardiac troponin, transthoracic echocardiogram (TTE), and lower extremity complete compression ultrasound (CCUS) testing for predicting the risk of PE-related death. Methods: The study assessed the ability of three diagnostic tests (cardiac troponin I (cTnI), echocardiogram, and CCUS) to prognosticate the primary outcome of PE-related mortality during 30 days of follow-up after a diagnosis of PE by objective testing. Results: Of 591 normotensive patients diagnosed with PE, the primary outcome occurred in 37 patients (6.3%; 95% CI 4.3% to 8.2%). Patients with right ventricular dysfunction (RVD) by TTE and concomitant deep vein thrombosis (DVT) by CCUS had a PE-related mortality of 19.6%, compared with 17.1% of patients with elevated cTnI and concomitant DVT and 15.2% of patients with elevated cTnI and RVD. The use of any two-test strategy had a higher specificity and positive predictive value compared with the use of any test by itself. A combined three-test strategy did not further improve prognostication. For a subgroup analysis of high-risk patients, according to the pulmonary embolism severity index (classes IV and V), positive predictive values of the two-test strategies for PE-related mortality were 25.0%, 24.4% and 20.7%, respectively. Conclusions: In haemodynamically stable patients with acute symptomatic PE, a combination of echocardiography (or troponin testing) and CCUS improved prognostication compared with the use of any test by itself for the identification of those at high risk of PE-related death.
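The gain in specificity and positive predictive value from requiring two positive tests can be sketched with simple 2×2 arithmetic. The sketch below assumes (hypothetically) that the two tests are conditionally independent given the outcome, and the single-test characteristics are made up for illustration; only the 6.3% event rate comes from the study:

```python
def combine_and(sens_a, spec_a, sens_b, spec_b):
    """Sensitivity/specificity of a rule requiring BOTH tests positive,
    assuming conditional independence (an illustrative simplification)."""
    sens = sens_a * sens_b
    spec = 1 - (1 - spec_a) * (1 - spec_b)  # a false positive needs both tests falsely positive
    return sens, spec

def ppv(sens, spec, prevalence):
    """Positive predictive value from Bayes' rule."""
    true_pos = sens * prevalence
    false_pos = (1 - spec) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical single-test characteristics; 0.063 is the observed PE-mortality rate.
ppv_single = ppv(0.70, 0.60, 0.063)
sens2, spec2 = combine_and(0.70, 0.60, 0.65, 0.70)
ppv_double = ppv(sens2, spec2, 0.063)
print(round(ppv_single, 3), round(ppv_double, 3))  # the two-test rule has the higher PPV
```

This illustrates the paper's qualitative finding: demanding two positive tests trades sensitivity for specificity, raising the PPV.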


Only limited data are available on the relationship between family history of laryngeal and other neoplasms and laryngeal cancer risk. We investigated the issue using data from a multicentre case-control study conducted in Italy and Switzerland between 1992 and 2009, including 852 cases with histologically confirmed laryngeal cancer and 1970 controls admitted to hospital for acute, non-neoplastic conditions. Unconditional logistic regression models adjusted for age, sex, study center, education, tobacco smoking, alcohol drinking and number of siblings were used to estimate the odds ratios (ORs) of laryngeal cancer. The multivariate OR was 2.8 (95% confidence interval [CI], 1.5-5.3) in subjects reporting a first-degree relative with laryngeal cancer, as compared to subjects with no family history. The OR was higher when the relative was diagnosed before 60 years of age (OR = 3.5, 95% CI 1.4-8.8). Compared with non-smokers and moderate drinkers without a family history, the OR was 37.1 (95% CI 9.9-139.4) for current smokers and heavy drinkers with a family history of laryngeal cancer. Family history of colorectal (OR = 1.5, 95% CI 1.0-2.3) and kidney (OR = 3.8, 95% CI 1.2-12.1) cancer was also associated with an increased risk of laryngeal cancer, while no significant increase in risk was found for family history of cancer at all sites excluding the larynx (OR = 1.1).
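The odds ratios above come from 2×2 exposure tables before adjustment; a crude OR with a Woolf-method 95% CI can be computed directly from the cell counts. The counts below are hypothetical (the published estimates are adjusted by logistic regression):

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Crude OR and Woolf 95% CI for a 2x2 table:
    a = exposed cases, b = unexposed cases, c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR), Woolf's method
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical: 30 of 852 cases and 25 of 1970 controls report an affected first-degree relative.
or_, lo, hi = odds_ratio_ci(30, 822, 25, 1945)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```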


BACKGROUND: Invasive fungal infection (IFI) is associated with high mortality after heart transplantation (HTx). After two undiagnosed fatal cases of early disseminated fungal infection in our heart transplant program, a retrospective analysis was conducted to identify risk factors for the development of IFI and to implement a new antifungal prophylaxis policy. METHODS: Clinical characteristics of HTx recipients hospitalized in our center (2004-2010) were recorded (Period 1), and risk factors associated with IFI were investigated using Cox regression analysis. From October 2010 to October 2012 (Period 2), targeted caspofungin prophylaxis was administered to all recipients at high risk for IFI, based on the results of the Period 1 analysis. RESULTS: During Period 1, 10% (6/59) of the patients developed IFI, at a median onset of 9 days after transplantation. By multivariate analysis, the use of posttransplant extracorporeal membrane oxygenation (ECMO) was the strongest predictor of fungal infection (OR, 29.93; 95% CI, 1.51-592.57, P=0.03), whereas renal replacement therapy (RRT) and Aspergillus colonization were significant predictors only by univariate analysis. During Period 2, only 4% (1/26) of the patients developed IFI. Among patients at high risk for IFI, antifungal prophylaxis was administered to 17% (4/23) in Period 1 versus 100% (13/13) in Period 2 (P<0.01). By survival analysis, antifungal prophylaxis was associated with a reduction in 90-day IFI incidence (HR, 0.14; 95% CI, 0.03-0.84, P=0.03) and 30-day mortality (HR, 0.25; 95% CI, 0.09-0.8, P=0.02). CONCLUSION: Extracorporeal membrane oxygenation was identified as an important risk factor for IFI after HTx, and its use may warrant targeted administration of antifungal prophylaxis in the immediate posttransplant period.


Abstract: This thesis presents three empirical studies in the field of health insurance in Switzerland. First, we investigate the link between health insurance coverage and health care expenditures. We use claims data for over 60,000 adult individuals covered by a major Swiss health insurance fund, followed for four years; the data show a strong positive correlation between coverage and expenditures. Two methods are developed and estimated in order to separate selection effects (due to individual choice of coverage) from incentive effects ("ex post moral hazard"). The first method uses the comparison between inpatient and outpatient expenditures to identify both effects, and we conclude that both selection and incentive effects are significantly present in our data. The second method is based on a structural model of the joint demand for health care and health insurance, and exploits the change in the marginal cost of health care to identify selection and incentive effects. We conclude that the correlation between insurance coverage and health care expenditures may be decomposed into the two effects: 75% may be attributed to selection and 25% to incentive effects. Moreover, we estimate that a decrease in the coinsurance rate from 100% to 10% increases the marginal demand for health care by about 90%, and a decrease from 100% to 0% by about 150%. Secondly, having shown that selection and incentive effects exist in the Swiss health insurance market, we present the consequences of this result in the context of risk adjustment. We show that if individuals choose their insurance coverage as a function of their health status (selection effect), the optimal compensations should be a function of the selection and incentive effects. Therefore, a risk adjustment mechanism that ignores these effects, as is presently the case in Switzerland, will miss its main goal of eliminating the incentive for sickness funds to select risks. Using a simplified model, we show that, in the case of self-selection, the optimal compensations have to take into account the distribution of risks across insurance plans in order to avoid incentives to select risks. We then apply our propositions to Swiss data and propose a simple econometric procedure to control for self-selection when estimating the risk adjustment formula, in order to compute the optimal compensations.


Because of the increase in workplace automation and the diversification of industrial processes, workplaces have become more and more complex. The classical approaches used to address workplace hazard concerns, such as checklists or sequence models, are therefore of limited use in such complex systems. Moreover, because of the multifaceted nature of workplaces, the use of single-oriented methods, such as AEA (man-oriented), FMEA (system-oriented), or HAZOP (process-oriented), is not satisfactory. The use of a dynamic modeling approach that allows multiple-oriented analyses may constitute an alternative to overcome this limitation. The qualitative modeling aspects of the MORM (man-machine occupational risk modeling) model are discussed in this article. The model, implemented with an object-oriented Petri net tool (CO-OPN), has been developed to simulate and analyze industrial processes from an occupational health and safety (OH&S) perspective. The industrial process is modeled as a set of interconnected subnets (state spaces), which describe its constitutive machines. Process-related factors are introduced, in an explicit way, through machine interconnections and flow properties. Man-machine interactions are modeled as triggering events for the state spaces of the machines, and the CREAM cognitive behavior model is used to establish the relevant triggering events. In the CO-OPN formalism, the model is expressed as a set of interconnected CO-OPN objects defined over data types expressing the measure attached to the flow of entities transiting through the machines. Constraints on the measures assigned to these entities are used to determine the state changes in each machine. Interconnecting machines implies the composition of such flows and consequently the interconnection of the measure constraints. This is reflected by the construction of constraint enrichment hierarchies, which can be used for simulation and analysis optimization in a clear mathematical framework.
The use of Petri nets to perform multiple-oriented analysis opens perspectives in the field of industrial risk management. It may significantly reduce the duration of the assessment process. But, most of all, it opens perspectives in the field of risk comparisons and integrated risk management. Moreover, because of the generic nature of the model and tool used, the same concepts and patterns may be used to model a wide range of systems and application fields.
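The machine-as-state-space idea can be illustrated with a minimal classical place/transition Petri net. This is a plain sketch, not the CO-OPN formalism, which adds object orientation, data types, and measure constraints on flows; the "machine" modeled below is invented for illustration:

```python
class PetriNet:
    """Minimal place/transition net: a transition fires when every input place holds a token."""

    def __init__(self, marking):
        self.marking = dict(marking)  # place name -> token count
        self.transitions = {}         # transition name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (list(inputs), list(outputs))

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:   # consume one token from each input place
            self.marking[p] -= 1
        for p in outputs:  # produce one token in each output place
            self.marking[p] = self.marking.get(p, 0) + 1

# Toy machine: the operator loads a part (idle -> running), then the cycle ends.
net = PetriNet({"idle": 1, "running": 0, "parts": 2})
net.add_transition("load", ["idle", "parts"], ["running"])
net.add_transition("unload", ["running"], ["idle"])
net.fire("load")
net.fire("unload")
print(net.marking)  # one part consumed, machine back to idle
```

Interconnecting such subnets (sharing places between machines) is, in spirit, how the composition of flows described above is obtained.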


For several years now, scientists as well as society at large have been concerned about the risks organic micropollutants may pose to the aquatic environment. Indeed, numerous studies have shown the toxic effects these substances can induce in organisms living in our lakes and rivers, especially when the organisms are exposed to acute or chronic concentrations. However, most of these studies have focused on the toxicity of single compounds, i.e. substances considered individually. The same holds for the environmental risk assessment procedures in current European regulations. Yet aquatic organisms are typically exposed every day to thousands of organic compounds simultaneously, and the toxic effects of these "cocktails" cannot be neglected. The ecological risk of such mixtures therefore has to be assessed in the most reliable and appropriate way. In the first part of this thesis, the procedures currently envisioned for aquatic mixture risk assessment in European legislation are described. These methodologies are based on the concentration addition mixture model, using either the predicted no-effect concentrations (PNEC) or the effect concentrations (EC50) of certain species of a trophic level, together with assessment factors. These approaches were applied to two specific case studies, Lake Geneva and the River Rhône in Switzerland, and the outcomes of these applications are discussed. These first-level assessments showed that the mixture risks for the studied cases rapidly exceeded the critical value, an exceedance generally due to two or three main substances (essentially pesticides in our cases).
The proposed procedures therefore allow the identification of the most problematic substances, for which management measures, such as a reduction of their input into the aquatic environment, should be envisioned. However, it was also shown that the risk levels associated with these mixtures are not negligible even without considering the main substances. Indeed, it is the sum of all the substances, even those present only in trace amounts, that is problematic, which is more challenging in terms of risk management. Moreover, a lack of reliability in the procedures was highlighted, which can lead to contradictory results in terms of risk; this is linked to the inconsistency of the assessment factors applied in the different methods. In the second part of the thesis, the reliability of more advanced procedures to predict mixture effects on communities in the aquatic system was investigated. These established methodologies combine the concentration addition (CA) or response addition (RA) model with species sensitivity distribution (SSD) curves. However, mixture effect predictions have been shown to be consistent only when the mixture models are applied to a single species, and not to several species aggregated simultaneously into an SSD. Hence, a more rigorous procedure for mixture risk assessment is proposed: first apply the CA or RA model to each species separately and, in a second step, combine the results to build an SSD for the mixture. Unfortunately, this methodology is not applicable in most cases, because it requires large data sets that are usually unavailable. Therefore, the differences between the two methodologies were studied with artificially generated datasets, in order to characterize the robustness of the traditional approach of applying the mixture models directly to species sensitivity distributions.
The results showed that applying CA directly to SSDs may lead to underestimation of the mixture concentration affecting 5% or 50% of species, especially when the substances have a large standard deviation in their species sensitivity distribution. The application of the RA model can lead to over- or underestimates, depending mainly on the slope of the dose-response curves of the individual species. The potential underestimation with RA becomes important when the ratio between the EC50 and the EC10 of the dose-response curves of the species composing the SSD is smaller than 100. However, for common real cases of substance ecotoxicity data, the mixture risk calculated by applying the mixture models directly to SSDs remains consistent and would, if anything, slightly overestimate the risk. These results can be used as a theoretical validation of the currently applied methodology. Nevertheless, when assessing the risk of mixtures with this classical methodology, one has to keep this source of error in mind, especially when the SSDs present a data distribution outside the range determined in this study. Finally, in the last part of this thesis, we confronted mixture effect predictions with biological changes observed in the environment. In this study, long-term monitoring of a great European lake, Lake Geneva, provided the opportunity to assess to what extent the predicted toxicity of herbicide mixtures explains changes in the composition of the phytoplankton community, next to other classical limnological parameters such as nutrients. To reach this goal, the mixture toxicity of 14 herbicides regularly detected in the lake was calculated over several years, using the concentration addition and response addition models with species sensitivity distribution curves. A decreasing temporal gradient of toxicity was observed from 2004 to 2009.
Redundancy analysis and partial redundancy analysis showed that this gradient explains a significant portion of the variation in phytoplankton community composition, even after removing the effect of all other co-variables. Moreover, some species shown to be influenced, positively or negatively, by the decrease in toxicity in the lake over time exhibited similar behavior in mesocosm studies. It can be concluded that herbicide mixture toxicity is one of the key parameters explaining phytoplankton changes in Lake Geneva. To conclude, different methods exist to predict the risk of mixtures in ecosystems, but their reliability varies depending on the underlying hypotheses. One should therefore carefully consider these hypotheses, as well as the limits of the approaches, before using the results for environmental risk management.
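The SSD side of the methodology discussed in this summary can also be sketched briefly. Assuming a log-normal SSD fitted to purely illustrative single-species EC50 values, the snippet below derives the hazardous concentration for 5% of species (HC5) and shows how a larger standard deviation of the distribution, the situation flagged above as error-prone, pushes the HC5 downward.

```python
# A minimal sketch, not the thesis methodology itself: fit a log-normal SSD to
# illustrative single-species EC50 values and derive the HC5.
import math
from statistics import NormalDist

def hc5(ec50s, p=0.05):
    """HC5 from a log-normal species sensitivity distribution."""
    logs = [math.log10(c) for c in ec50s]
    mu = sum(logs) / len(logs)
    sd = math.sqrt(sum((x - mu) ** 2 for x in logs) / (len(logs) - 1))
    return 10 ** (mu + NormalDist().inv_cdf(p) * sd)

narrow = [1.0, 1.5, 2.0, 2.5, 3.0]    # species with similar sensitivities
wide = [0.1, 1.0, 5.0, 20.0, 100.0]   # sensitivities spread over 3 orders of magnitude
# A wider distribution yields a markedly lower (more protective) HC5.
print(hc5(narrow), hc5(wide))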


The Organization of the Thesis. The remainder of the thesis comprises five chapters and a conclusion. The next chapter formalizes the envisioned theory into a tractable model. Section 2.2 presents a formal description of the model economy: the individual heterogeneity, the individual objective, the UI setting, the population dynamics and the equilibrium. The welfare and efficiency criteria for qualifying various equilibrium outcomes are proposed in section 2.3. The fourth section shows how the model-generated information can be computed. Chapter 3 transposes the model from chapter 2 into conditions that enable its use in the analysis of individual labor market strategies and their implications for the labor market equilibrium. In section 3.2 the Swiss labor market data sets, stylized facts, and the UI system are presented. The third section outlines and motivates the parameterization method. In section 3.4 the model's replication ability is evaluated and some aspects of the parameter choice are discussed. Numerical solution issues can be found in the appendix. Chapter 4 examines the determinants of search-strategic behavior in the model economy and its implications for the labor market aggregates. In section 4.2, the unemployment duration distribution is examined and related to search strategies. Section 4.3 shows how search-strategic behavior is influenced by UI eligibility and section 4.4 how it is determined by individual heterogeneity. The composition effects generated by search strategies in labor market aggregates are examined in section 4.5. The last section evaluates the model's replication of the empirical unemployment escape frequencies reported in Sheldon [67]. Chapter 5 applies the model economy to examine the effects on the labor market equilibrium of shocks to the labor market risk structure, to the deep underlying labor market structure and to the UI setting.
Section 5.2 examines the effects of the labor market risk structure on the labor market equilibrium and labor market strategic behavior. The effects of alterations in the deep economic structural parameters of the labor market, i.e. individual preferences and production technology, are shown in section 5.3. Finally, the impacts of the UI setting on the labor market are studied in section 5.4. This section also evaluates the role of UI authority monitoring and the differences in the way changes in the replacement rate and the UI benefit duration affect the labor market. In chapter 6 the model economy is applied in counterfactual experiments to assess several aspects of the Swiss labor market movements in the nineties. Section 6.2 examines the two equilibria characterizing the Swiss labor market in the nineties: the "growth" equilibrium with a "moderate" UI regime and the "recession" equilibrium with a more "generous" UI. Section 6.3 evaluates the isolated effects of the structural shocks, while the isolated effects of the UI reforms are analyzed in section 6.4. Particular dimensions of the UI reforms, the duration, replacement rate and tax rate effects, are studied in section 6.5, while labor market equilibria without benefits are evaluated in section 6.6. In section 6.7 the structural and institutional interactions that may act as unemployment amplifiers are discussed in view of the obtained results. A welfare analysis based on individual welfare in different structural and UI settings is presented in the eighth section. Finally, the results are related to the more favorable unemployment trends after 1997. The conclusion evaluates the features embodied in the model economy with respect to the resulting model dynamics, to derive lessons from the model design. The thesis ends by proposing guidelines for future improvements of the model and directions for further research.


The State of Santa Catarina, Brazil, has agricultural and livestock activities, such as pig farming, that are responsible for adding large amounts of phosphorus (P) to soils. However, a method is required to evaluate the environmental risk of these high soil P levels. One possible method for evaluating the environmental risk of P fertilization, whether organic or mineral, is to establish threshold levels of soil available P, measured by Mehlich-1 extractions, below which there is not a high risk of P transfer from the soil to surface waters. However, the Mehlich-1 extractant is sensitive to soil clay content, and that factor should be considered when establishing such P-thresholds. The objective of this study was to determine P-thresholds using the Mehlich-1 extractant for soils with different clay contents in the State of Santa Catarina, Brazil. Soil from the B-horizon of an Oxisol with 800 g kg-1 clay was mixed with different amounts of sand to prepare artificial soils with 200, 400, 600, and 800 g kg-1 clay. The artificial soils were incubated for 30 days with moisture content at 80 % of field capacity to stabilize their physicochemical properties, followed by additional incubation for 30 days after liming to raise the pH(H2O) to 6.0. Soil P sorption curves were produced, and the maximum sorption (Pmax) was determined using the Langmuir model for each soil texture evaluated. Based on the Pmax values, seven rates of P were added to four replicates of each soil, and incubated for 20 days more. Following incubation, available P contents (P-Mehlich-1) and P dissolved in the soil solution (P-water) were determined. A change-point value (the P-Mehlich-1 value above which P-water starts increasing sharply) was calculated through the use of segmented equations. The maximum level of P that a soil might safely adsorb (P-threshold) was defined as 80 % of the change-point value to maintain a margin for environmental safety. 
The P-threshold value, in mg dm-3, was dependent on the soil clay content according to the model P-threshold = 40 + Clay, where the soil clay content is expressed as a percentage. The model was tested on 82 diverse soil samples from the State of Santa Catarina and was able to distinguish samples with high and low environmental risk.
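The reported clay-dependent threshold is simple enough to express directly. This is a hedged sketch: the function names and the sample values below are hypothetical, only the relationship P-threshold = 40 + Clay comes from the study.

```python
# Sketch of the reported relationship P-threshold = 40 + clay (mg dm-3, clay in %).
def p_threshold(clay_percent):
    """Maximum safe Mehlich-1 P (mg dm-3) for a soil with the given clay content (%)."""
    return 40 + clay_percent

def high_environmental_risk(p_mehlich1, clay_percent):
    """True when measured Mehlich-1 P exceeds the clay-dependent threshold."""
    return p_mehlich1 > p_threshold(clay_percent)

print(p_threshold(20))                    # 60 mg dm-3 for a 20 % clay soil
print(high_environmental_risk(120, 40))   # True: 120 exceeds the 80 mg dm-3 threshold
print(high_environmental_risk(50, 40))    # False: 50 is below the 80 mg dm-3 threshold
```

The clay dependence matters because Mehlich-1 extracts proportionally more P from sandy soils; a single fixed threshold would over-flag clayey soils and under-flag sandy ones.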


BACKGROUND: The risk of falls is the most commonly cited reason for not providing oral anticoagulation, although the risk of bleeding associated with falls on oral anticoagulants is still debated. We aimed to evaluate whether patients on oral anticoagulation with high falls risk have an increased risk of major bleeding. METHODS: We prospectively studied consecutive adult medical patients who were discharged on oral anticoagulants. The outcome was the time to a first major bleed within a 12-month follow-up period adjusted for age, sex, alcohol abuse, number of drugs, concomitant treatment with antiplatelet agents, and history of stroke or transient ischemic attack. RESULTS: Among the 515 enrolled patients, 35 patients had a first major bleed during follow-up (incidence rate: 7.5 per 100 patient-years). Overall, 308 patients (59.8%) were at high risk of falls, and these patients had a nonsignificantly higher crude incidence rate of major bleeding than patients at low risk of falls (8.0 vs 6.8 per 100 patient-years, P=.64). In multivariate analysis, a high falls risk was not statistically significantly associated with the risk of a major bleed (hazard ratio 1.09; 95% confidence interval, 0.54-2.21). Overall, only 3 major bleeds occurred directly after a fall (incidence rate: 0.6 per 100 patient-years). CONCLUSIONS: In this prospective cohort, patients on oral anticoagulants at high risk of falls did not have a significantly increased risk of major bleeds. These findings suggest that being at risk of falls is not a valid reason to avoid oral anticoagulants in medical patients.
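The crude incidence rates quoted above are simply events divided by accumulated follow-up time. In this sketch the ~467 patient-year denominator is back-calculated from the reported 7.5 per 100 patient-years and is therefore an assumption, not a figure taken from the study.

```python
def incidence_per_100py(events, patient_years):
    """Crude incidence rate per 100 patient-years of follow-up."""
    return 100.0 * events / patient_years

# 35 first major bleeds over an assumed ~467 patient-years of follow-up
print(round(incidence_per_100py(35, 467), 1))  # 7.5
```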


This article investigates the allocation of demand risk within an incomplete contract framework. We consider an incomplete contractual relationship between a public authority and a private provider (i.e. a public-private partnership), in which the latter invests in non-verifiable cost-reducing efforts and the former invests in non-verifiable adaptation efforts to respond to changing consumer demand over time. We show that the party that bears the demand risk has fewer hold-up opportunities and that this leads the other contracting party to make more effort. Thus, in our model, bearing less risk can lead to more effort, which we describe as a new example of 'counter-incentives'. We further show that when the benefits of adaptation are important, it is socially preferable to design a contract in which the demand risk remains with the private provider, whereas when the benefits of cost-reducing efforts are important, it is socially preferable to place the demand risk on the public authority. We then apply these results to explain two well-known case studies.


Background: There is little information regarding the health status of migrants compared with subjects who remained in their country of origin. The aim was to compare Portuguese living in Porto (Portugal) with Portuguese migrants living in Lausanne (Switzerland). Design: cross-sectional studies conducted in Porto (EpiPorto, n=1150) and Lausanne (CoLaus, n=388) among Portuguese subjects aged between 35 and 65 years. Methods: body mass index, blood pressure, cholesterol and glucose levels were assessed using standardized procedures. Educational level and antihypertensive, hypocholesterolemic and antidiabetic treatments were collected using questionnaires. Results: Portuguese living in Lausanne were younger, more frequently male and less educated than Portuguese living in Porto. After multivariate adjustment, Portuguese living in Porto had a higher likelihood of being obese [odds ratio and 95% confidence interval: 1.40 (1.01-1.94)] or abdominally obese [OR: 1.40 (1.02-1.93)] than Portuguese living in Lausanne. Portuguese living in Porto had a higher likelihood of being hypertensive than Portuguese living in Lausanne [OR: 1.38 (1.01-1.90)], while no differences were found regarding hypertension management and control. Portuguese living in Porto had a higher likelihood of being hypercholesterolemic [OR: 1.40 (1.06-1.85)] and were less likely to be treated [OR: 0.47 (0.27-0.83)] and controlled [OR: 0.47 (0.27-0.83)] than Portuguese living in Lausanne. Finally, no differences were found regarding smoking or the prevalence and management of diabetes. Conclusion: Portuguese living in Lausanne, Switzerland, present a better cardiovascular risk profile and tend to be better managed regarding their cardiovascular risk factors than Portuguese living in Porto, Portugal.