926 results for Elementary Methods In Number Theory


Relevance: 100.00%

Abstract:

BACKGROUND: The use of n-3 fatty acids may prevent cardiovascular events in patients with recent myocardial infarction or heart failure. Their effects in patients with (or at risk for) type 2 diabetes mellitus are unknown. METHODS: In this double-blind study with a 2-by-2 factorial design, we randomly assigned 12,536 patients who were at high risk for cardiovascular events and had impaired fasting glucose, impaired glucose tolerance, or diabetes to receive a 1-g capsule containing at least 900 mg (90% or more) of ethyl esters of n-3 fatty acids or placebo daily and to receive either insulin glargine or standard care. The primary outcome was death from cardiovascular causes. The results of the comparison between n-3 fatty acids and placebo are reported here. RESULTS: During a median follow up of 6.2 years, the incidence of the primary outcome was not significantly decreased among patients receiving n-3 fatty acids, as compared with those receiving placebo (574 patients [9.1%] vs. 581 patients [9.3%]; hazard ratio, 0.98; 95% confidence interval [CI], 0.87 to 1.10; P=0.72). The use of n-3 fatty acids also had no significant effect on the rates of major vascular events (1034 patients [16.5%] vs. 1017 patients [16.3%]; hazard ratio, 1.01; 95% CI, 0.93 to 1.10; P=0.81), death from any cause (951 [15.1%] vs. 964 [15.4%]; hazard ratio, 0.98; 95% CI, 0.89 to 1.07; P=0.63), or death from arrhythmia (288 [4.6%] vs. 259 [4.1%]; hazard ratio, 1.10; 95% CI, 0.93 to 1.30; P=0.26). Triglyceride levels were reduced by 14.5 mg per deciliter (0.16 mmol per liter) more among patients receiving n-3 fatty acids than among those receiving placebo (P<0.001), without a significant effect on other lipids. Adverse effects were similar in the two groups. CONCLUSIONS: Daily supplementation with 1 g of n-3 fatty acids did not reduce the rate of cardiovascular events in patients at high risk for cardiovascular events. (Funded by Sanofi; ORIGIN ClinicalTrials.gov number, NCT00069784.).
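As a back-of-the-envelope illustration of the hazard-ratio arithmetic reported above (a rough approximation, not the trial's Cox regression; the arm sizes are assumed to be roughly half of the 12,536 randomized patients):

```python
# Crude check of the primary-outcome hazard ratio from event counts alone,
# assuming (hypothetically) similar follow-up in both arms so that the ratio of
# event proportions approximates the Cox-model hazard ratio reported in the trial.
import math
from scipy.stats import norm

d_treat, n_treat = 574, 6268   # events / patients, n-3 fatty acid arm (arm size assumed)
d_ctrl,  n_ctrl  = 581, 6268   # events / patients, placebo arm (arm size assumed)

hr = (d_treat / n_treat) / (d_ctrl / n_ctrl)      # crude hazard-ratio approximation
se_log_hr = math.sqrt(1 / d_treat + 1 / d_ctrl)   # usual large-sample SE of log(HR)
z = norm.ppf(0.975)
ci = (math.exp(math.log(hr) - z * se_log_hr),
      math.exp(math.log(hr) + z * se_log_hr))
# Lands in the same ballpark as the published 0.98 (95% CI 0.87 to 1.10).
print(f"HR ~ {hr:.2f}, 95% CI ~ ({ci[0]:.2f}, {ci[1]:.2f})")
```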

Relevance: 100.00%

Abstract:

The control and prediction of wastewater treatment plants pose an important goal: to keep the system in stable operating conditions at all times and thus avoid upsetting the environmental balance. It is known that qualitative information, coming from microscopic examinations and subjective remarks, has a deep influence on the activated sludge process, in particular on the total amount of effluent suspended solids, one of the measures of overall plant performance. The search for an input-output model of this variable and the prediction of sudden increases (bulking episodes) is thus a central concern to ensure the fulfillment of current discharge limitations. Unfortunately, the strong interrelation between variables, their heterogeneity and the very high amount of missing information make the use of traditional techniques difficult, or even impossible. Through the combined use of several methods, mainly rough set theory and artificial neural networks, reasonable prediction models are found, which also serve to show the differing importance of the variables and provide insight into the process dynamics.
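A minimal sketch of the modelling task described above, assuming hypothetical plant variables and simulated data with heavy missingness; it uses median imputation and a small neural network from scikit-learn rather than the authors' rough-set/ANN combination:

```python
# Predict effluent suspended solids from heterogeneous plant variables with many
# missing values: impute, scale, then fit a small feed-forward network.
import numpy as np
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "influent_flow": rng.normal(100, 20, 300),
    "mlss": rng.normal(2500, 400, 300),          # mixed-liquor suspended solids
    "filament_score": rng.integers(0, 5, 300),   # stand-in for qualitative microscopy remarks
})
y = 30 + 0.1 * df["influent_flow"] + 5 * df["filament_score"] + rng.normal(0, 5, 300)
df.loc[rng.random(300) < 0.3, "mlss"] = np.nan   # simulate heavy missingness

X_tr, X_te, y_tr, y_te = train_test_split(df, y, random_state=0)
model = make_pipeline(SimpleImputer(strategy="median"),
                      StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 2))
```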

Relevance: 100.00%

Abstract:

This is the report of the first workshop on Incorporating In Vitro Alternative Methods for Developmental Neurotoxicity (DNT) Testing into International Hazard and Risk Assessment Strategies, held in Ispra, Italy, on 19-21 April 2005. The workshop was hosted by the European Centre for the Validation of Alternative Methods (ECVAM) and jointly organized by ECVAM, the European Chemical Industry Council, and the Johns Hopkins University Center for Alternatives to Animal Testing. The primary aim of the workshop was to identify and catalog potential methods that could be used to assess how data from in vitro alternative methods could help to predict and identify DNT hazards. Working groups focused on two different aspects: a) details on the science available in the field of DNT, including discussions on the models available to capture the critical DNT mechanisms and processes, and b) policy and strategy aspects to assess the integration of alternative methods in a regulatory framework. This report summarizes these discussions and details the recommendations and priorities for future work.

Relevance: 100.00%

Abstract:

Introduction.- Knowledge of the predictors of an unfavourable outcome, e.g. non-return to work after an injury, makes it possible to identify patients at risk and to target interventions at modifiable predictors. It has recently been shown that INTERMED, a tool measuring biopsychosocial complexity in four domains (biological, psychological, social and care, with a score between 0 and 60 points), can be useful in this context. The aim of this study was to set up a predictive model for non-return to work, using INTERMED, in patients undergoing vocational rehabilitation after orthopaedic injury. Patients and methods.- In this longitudinal prospective study, the cohort consisted of 2156 consecutively included inpatients with orthopaedic trauma attending a rehabilitation hospital after a work-, traffic- or sport-related injury. Two years after discharge, a questionnaire regarding return to work was sent out (1502 questionnaires were returned). In addition to INTERMED, 18 predictors known at the baseline of rehabilitation were selected based on previous research. A multivariable logistic regression was performed. Results.- In the multivariable model, not returning to work at 2 years was significantly predicted by the INTERMED score: odds ratio (OR) 1.08 (95% confidence interval, CI [1.06; 1.11]) for a one-point increase in the scale; by qualified work status before the injury, OR = 0.74, CI (0.54; 0.99); by using French as the preferred language, OR = 0.60, CI (0.45; 0.80); by upper-extremity injury, OR = 1.37, CI (1.03; 1.81); by higher education (> 9 years), OR = 0.74, CI (0.55; 1.00); and by a 10-year increase in age, OR = 1.15, CI (1.02; 1.29). The area under the receiver-operating-characteristic (ROC) curve was 0.733 for the full model (INTERMED plus 18 variables). Discussion.- These results confirm that the total INTERMED score is a significant predictor of return to work. The full model combining the 18 predictors with the total INTERMED score has good predictive value. However, the number of variables to measure (19) is high for use as a screening tool in the clinic.
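The following hedged sketch reproduces the shape of the analysis described above (a multivariable logistic regression reported as odds ratios with 95% confidence intervals, plus an area under the ROC curve) on simulated data; the variable names and coefficients are assumptions, not the INTERMED cohort:

```python
# Logistic regression reported as odds ratios with 95% CIs, plus ROC-AUC.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1500
df = pd.DataFrame({
    "intermed": rng.normal(25, 8, n),          # hypothetical complexity score (0-60 scale)
    "age_decades": rng.normal(4.0, 1.0, n),
    "upper_extremity": rng.integers(0, 2, n),
})
logit = -3 + 0.08 * df["intermed"] + 0.14 * df["age_decades"] + 0.3 * df["upper_extremity"]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # 1 = did not return to work

X = sm.add_constant(df)
res = sm.Logit(y, X).fit(disp=0)
odds_ratios = np.exp(res.params)            # exponentiated coefficients = odds ratios
or_ci = np.exp(res.conf_int())              # exponentiated 95% confidence bounds
print(pd.concat([odds_ratios.rename("OR"), or_ci], axis=1))
print("AUC:", round(roc_auc_score(y, res.predict(X)), 3))
```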

Relevance: 100.00%

Abstract:

Introduction: Following a disaster, up to 50% of mass casualties are children. The number of disasters is increasing worldwide, including in Switzerland. Following a national directive, the mapping of the various disaster risks in Switzerland will be completed by the end of 2012. Pre-hospital disaster drills and plans are well established and regularly tested. In-hospital disaster plans are tested much less frequently, when they are available at all. Pediatric in-hospital full-scale disaster exercises have never been reported in Switzerland. Based on our local constraints, we set up and evaluated a disaster plan during two full-scale exercises. Methods: In a university hospital treating more than 35 000 pediatric emergencies per year, two exercises involving mock disaster victims aged 9 to 14 years were successively set up over a period of 3 years. The exercises were planned during the day, without modification of normal emergency room activities. The hospital staff was informed and trained in advance. Variables such as alarm timing and transmission, triage set-up and function, use of dedicated disaster medical records, communication and victim identification were assessed. Family members participated in the second exercise. An evaluation team observed and recorded exercise activities, identifying strengths and weaknesses. Results: On two separate occasions, a total of 44 mock patients participated and were triaged, admitted and treated in the hospital according to usual standards of care. Alarm transmission was not appropriate during the first exercise. Triage overload occurred on one occasion. In-hospital communication needed readjustment. Identification and in-hospital tracking of the children remained problematic. Hospital employees showed great enthusiasm and stressed the positive effect of full-scale exercises on their knowledge of the hospital disaster plan. Conclusions: Performing real-life disaster exercises in a pediatric hospital was very beneficial. The disaster plan was adapted to local needs and updated accordingly. An alarm transmission protocol was elaborated and tested. The triage set-up was adapted and tested. A hospital identification plan for injured children was created and tested. Full-scale hospital exercises evaluating disaster plans revealed several weaknesses in the system. Practice readjustments based on local experience were made. A tested pediatric disaster plan adapted to local constraints could minimize chaos and optimize care and support in the event of a real disaster. Children's identification and family reunification following a disaster remain a challenge.

Relevance: 100.00%

Abstract:

It is very well known that the first successful valuation of a stock option was done by solving a deterministic partial differential equation (PDE) of the parabolic type with some complementary conditions specific to the option. In this approach, the randomness in the option value process is eliminated through a no-arbitrage argument. An alternative approach is to construct a replicating portfolio for the option. From this viewpoint, the payoff function for the option is a random process which, under a new probability measure, turns out to be of a special type, a martingale. Accordingly, the value of the replicating portfolio (equivalently, of the option) is calculated as an expectation, with respect to this new measure, of the discounted value of the payoff function. Since the expectation is, by definition, an integral, its calculation can be made simpler by resorting to powerful methods already available in the theory of analytic functions. In this paper we use precisely two of those techniques to find the well-known value of a European call.
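As a compact illustration of the two valuation routes mentioned in the abstract, the sketch below prices a European call once with the Black-Scholes closed form (the PDE/no-arbitrage route) and once as a discounted risk-neutral expectation estimated by Monte Carlo; the parameters are arbitrary and the paper's specific analytic-function techniques are not reproduced:

```python
# European call: Black-Scholes closed form vs. discounted risk-neutral expectation.
import numpy as np
from scipy.stats import norm

S0, K, r, sigma, T = 100.0, 105.0, 0.03, 0.2, 1.0   # arbitrary example parameters

# (i) Closed-form Black-Scholes price
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
bs_price = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# (ii) Monte Carlo estimate of E*[e^{-rT} max(S_T - K, 0)] under the risk-neutral measure
rng = np.random.default_rng(42)
z = rng.standard_normal(1_000_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
mc_price = np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

print(f"Black-Scholes: {bs_price:.4f}   Monte Carlo: {mc_price:.4f}")
```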

Relevance: 100.00%

Abstract:

Integrated approaches using different in vitro methods in combination with bioinformatics can (i) increase the success rate and speed of drug development; (ii) improve the accuracy of toxicological risk assessment; and (iii) increase our understanding of disease. Three-dimensional (3D) cell culture models are important building blocks of this strategy, which has emerged in recent years. The majority of these models are organotypic, i.e., they aim to reproduce major functions of an organ or organ system. In many cases this implies that more than one cell type forms the 3D structure, and matrix elements often play an important role. This review summarizes the state of the art concerning commonalities of the different models. For instance, the theory of mass transport/metabolite exchange in 3D systems and the special analytical requirements for test endpoints in organotypic cultures are discussed in detail. In the next part, 3D model systems for selected organs (liver, lung, skin, brain) are presented and characterized in dedicated chapters. 3D approaches to the modeling of tumors are also presented and discussed. All chapters give a historical background, illustrate the large variety of approaches, and highlight advantages and drawbacks as well as specific requirements. Moreover, they refer to applications in disease modeling, drug discovery and safety assessment. Finally, consensus recommendations indicate a roadmap for the successful implementation of 3D models in routine screening. It is expected that the use of such models will accelerate progress by reducing error rates and wrong predictions from compound testing.
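As a generic, textbook-style illustration of the mass-transport point above (these equations are not taken from the review): for a spherical cell aggregate of radius R with effective diffusion coefficient D, surface concentration C_0 and an assumed zero-order consumption rate q, the steady-state concentration profile and the largest radius that avoids a depleted core are

\[ C(r) \;=\; C_0 - \frac{q}{6D}\left(R^2 - r^2\right), \qquad R_{\mathrm{crit}} \;=\; \sqrt{\frac{6\,D\,C_0}{q}} . \]

Aggregates larger than R_crit develop a nutrient- or oxygen-starved core, which is one reason organotypic 3D cultures impose special requirements on culture format and on how analytical endpoints are sampled.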

Relevance: 100.00%

Abstract:

Introduction: Surgical decision making in lumbar spinal stenosis (LSS) takes into account primarily clinical symptoms as well as concordant radiological findings. We hypothesized that operative thresholds would vary widely, particularly as far as the judged severity of radiological stenosis is concerned. Patients and methods: The number of surgeons who would proceed to decompression was studied relative to the perceived severity of radiological stenosis, based either on measurements of the dural sac cross-sectional area (DSCA) or on the recently described morphological grading as seen on axial T2 MRI images. A link to an electronic survey page with a set of ten axial T2 MRI images taken from ten patients with either low back pain or LSS was sent to members of three national or international spine societies. These 10 images were initially presented in random order and re-shuffled on a second page, this time including DSCA measurements ranging from 14 to 226 mm2, giving a total of 20 images to appraise. Morphological grades ranged from A to D. Surgeons were asked whether they would consider decompression given the radiological appearance of stenosis, assuming that symptoms of neurological claudication were severe and that patients were otherwise fit for surgery. Fisher's exact test was performed after dichotomization of the data when appropriate. Results: A total of 142 spine surgeons (113 orthopedic spine surgeons, 29 neurosurgeons) responded from 25 countries. Substantial agreement was observed on operating in patients with severe (grade C) or extreme (grade D) stenosis as defined by the morphological grade, compared with lesser (A and B) grades (p<0.0001). The decision to operate did not depend on the number of years in practice, the medical density in the practicing country or the specialty, although more neurosurgeons would operate on grade C stenosis (p<0.005). Disclosing the DSCA measurement did not alter the decision to operate. Although only 20 surgeons had prior knowledge of the description of the morphological grading, their responses showed no statistically significant difference from those of the remaining 122 physicians. Conclusions: This study showed that surgeons across borders are less influenced by the DSCA in their decision making than by the morphological appearance of the dural sac. Classifying LSS according to morphology rather than surface measurements appears to be consistent with current clinical practice.
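As an illustration of the statistical comparison mentioned above, the sketch below runs Fisher's exact test on a dichotomized decision-to-operate table; the counts are invented and do not come from the survey:

```python
# Fisher's exact test on a hypothetical 2x2 table: decision to operate vs. stenosis grade.
from scipy.stats import fisher_exact

#            would operate   would not operate
table = [[120, 22],    # grade C or D (severe/extreme stenosis)  -- made-up counts
         [35, 107]]    # grade A or B (lesser stenosis)          -- made-up counts

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.2e}")
```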

Relevance: 100.00%

Abstract:

Executive Summary: The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics, first in the context of a general asset pricing model, then across models and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we implement an idea from the field of fuzzy set theory in the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results in asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify the risk-reward trade-off, the third one can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative-agent model addressing some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation undertaken to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics relative to those of realized returns from portfolio strategies that are optimal with respect to single performance measures. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance.
We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate the ones resulting from optimization with respect to only a single measure, for example the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, or the sequence of expected shortfalls over a range of quantiles. As the plot of the absolute Lorenz curve for the aggregated performance measures was above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio return distribution that second-order stochastically dominates those obtained from virtually all performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index, relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results on the bounds of the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
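A sketch of the two dominance checks described above, on synthetic return series: a Kolmogorov-Smirnov test and an empirical-CDF comparison for first-order stochastic dominance, and a pointwise comparison of absolute Lorenz curves (cumulative expected shortfalls) for second-order dominance. The return data and parameters are illustrative only:

```python
# First- and second-order stochastic dominance checks on simulated portfolio returns.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
r_agg = rng.normal(0.008, 0.04, 2000)     # returns from the aggregated-measure strategy
r_single = rng.normal(0.005, 0.05, 2000)  # returns from a single-measure strategy

print("KS test:", ks_2samp(r_agg, r_single))

# First-order dominance: F_agg(x) <= F_single(x) for all x on a common grid.
grid = np.linspace(min(r_agg.min(), r_single.min()),
                   max(r_agg.max(), r_single.max()), 500)
cdf = lambda x, g: np.searchsorted(np.sort(x), g, side="right") / x.size
fsd = np.all(cdf(r_agg, grid) <= cdf(r_single, grid))
print("First-order dominance of aggregated over single:", fsd)

def absolute_lorenz(x, quantiles):
    # Cumulative sum of the worst q-fraction of outcomes, i.e. an expected-shortfall profile.
    xs = np.sort(x)
    return np.array([xs[: max(1, int(q * xs.size))].sum() / xs.size for q in quantiles])

q = np.linspace(0.01, 1.0, 100)
ssd = np.all(absolute_lorenz(r_agg, q) >= absolute_lorenz(r_single, q))
print("Second-order dominance of aggregated over single:", ssd)
```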

Relevance: 100.00%

Abstract:

Background: Addressing the risks of nanoparticles requires knowledge about their hazards, which is generated progressively, but also about occupational exposure and liberation into the environment. However, such information is currently not collected systematically, so the risk assessment of this exposure or liberation lacks quantitative data. In 2006 a targeted telephone survey among Swiss companies (1) showed the usage of nanoparticles in a few selected companies but did not provide data that could be extrapolated to the totality of the Swiss workforce. The goal of this study was to evaluate in a representative way the current prevalence and level of nanoparticle usage in Swiss industry, the health, safety and environment measures taken, and the number of potentially exposed workers. Results: A representative, stratified mail survey was conducted among 1,626 clients of the Swiss National Accident Insurance Fund (SUVA). SUVA insures about 80,000 manufacturing firms, which represent 84% of all Swiss manufacturing companies. 947 companies answered the survey (58.3% response rate). Extrapolation to all Swiss manufacturing companies results in 1,309 workers (95% confidence interval: 1,073 to 1,545) across the Swiss manufacturing sector being potentially exposed to nanoparticles in 586 companies (95% CI: 145 to 1,027). This corresponds to 0.08% (95% CI: 0.06% to 0.09%) of all Swiss manufacturing sector workers and to 0.6% (95% CI: 0.2% to 1.1%) of companies. The industrial chemistry sector showed the highest percentage of companies using nanoparticles (21.2% of those surveyed) and a high percentage of potentially exposed workers (0.5% of workers in these companies), but many other important sectors also reported nanoparticle use. Personal protective equipment was the predominant protection strategy. Only a minority applied specific environmental protection measures. Conclusions: This is the first representative nationwide study on the prevalence of nanoparticle usage across a manufacturing sector. The information about the number of companies can be used for quantitative risk assessment. Furthermore, it can help policy makers design strategies to support companies in the responsible development of safer nanomaterial use. Given the low prevalence of nanoparticle usage, there would still seem to be time to introduce the necessary protection methods in a proactive and cost-effective way in Swiss industry. But if the predicted "nano-revolution" comes true, now is the time to take action.
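A simplified sketch of the kind of stratified expansion estimate, with a normal-approximation confidence interval, that underlies such an extrapolation; the strata, company counts and reported worker numbers below are invented, and the study's actual weighting scheme may differ:

```python
# Stratified expansion estimator with finite-population correction (toy data).
import numpy as np

# Per stratum: number of insured companies N, and exposed workers reported per
# responding company (most respondents report zero).
strata = {
    "chemistry": dict(N=900,   reports=np.r_[np.zeros(140), 5, 12, 3, 8, 2]),
    "other_mfg": dict(N=79000, reports=np.r_[np.zeros(795), 1, 2, 1, 4]),
}

total, var = 0.0, 0.0
for s in strata.values():
    y, N = s["reports"], s["N"]
    n = y.size
    total += N * y.mean()                           # expansion estimate: N_h * mean per company
    var += N**2 * (1 - n / N) * y.var(ddof=1) / n   # stratum variance with FPC

ci = (total - 1.96 * np.sqrt(var), total + 1.96 * np.sqrt(var))
print(f"estimated exposed workers: {total:.0f}, 95% CI ({ci[0]:.0f}, {ci[1]:.0f})")
```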

Relevance: 100.00%

Abstract:

We study the contribution to vacuum decay in field theory due to the interaction between the long- and short-wavelength modes of the field. The field model considered consists of a scalar field of mass M with a cubic term in the potential. The dynamics of the long-wavelength modes becomes diffusive through this interaction. The diffusive behavior is described by the reduced Wigner function that characterizes the state of the long-wavelength modes. This function is obtained from the full Wigner function by integrating out the degrees of freedom of the short-wavelength modes. The dynamical equation for the reduced Wigner function becomes a kind of Fokker-Planck equation, which is solved with suitable boundary conditions enforcing an initial metastable vacuum state trapped in the potential well. As a result, a finite activation rate is found, even at zero temperature, for the formation of true-vacuum bubbles of size M^{-1}. This effect makes a substantial contribution to the total decay rate.
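As a schematic, one-dimensional textbook analogue of the mechanism described above (an illustrative assumption, not the paper's field-theoretic calculation): if a slow coordinate x undergoes overdamped diffusion with friction \eta and diffusion coefficient D in a metastable potential V(x), its probability density obeys a Fokker-Planck (Smoluchowski) equation, and the stationary escape flux over a barrier of height \Delta V yields a finite, Arrhenius-like activation rate,

\[ \partial_t P(x,t) \;=\; \partial_x\!\left[\frac{V'(x)}{\eta}\,P(x,t)\right] + D\,\partial_x^2 P(x,t), \qquad \Gamma \;\simeq\; \frac{\sqrt{V''(x_{\min})\,\lvert V''(x_{\max})\rvert}}{2\pi\eta}\; e^{-\Delta V/(\eta D)} . \]

Here the noise induced by the short-wavelength modes plays the role of the diffusion coefficient, which is consistent with the finite zero-temperature activation rate reported above.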

Relevance: 100.00%

Abstract:

Drug-eluting microspheres are used for the embolization of hypervascular tumors and allow local, controlled drug release. Although drug release from the microspheres relies on fast ion exchange, so far only slow-releasing in vitro dissolution methods have been correlated with in vivo data. Three in vitro release methods are assessed in this study for their potential to predict the slow in vivo release of sunitinib from chemoembolization spheres to the plasma, and the fast local in vivo release obtained in an earlier study in rabbits. Release in an orbital shaker was slow (t50% = 4.5 h, 84% release) compared with the fast release in USP 4 flow-through implant cells (t50% = 1 h, 100% release). Sunitinib release in saline from microspheres enclosed in dialysis inserts was prolonged and incomplete (t50% = 9 days, 68% release) owing to low drug diffusion through the dialysis membrane. The slow-release profile corresponded best to the low sunitinib plasma AUC following injection of sunitinib-eluting spheres. Although limited by a lack of standardization, release in the orbital shaker fitted best the local in vivo sunitinib concentrations. Drug release in USP flow-through implant cells was too fast to correlate with local concentrations, although this method is preferred for discriminating between different sphere types.
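As an illustration of how a t50% can be extracted from cumulative-release data, the sketch below fits a simple first-order release model; both the model choice and the data points are assumptions for illustration, not the study's analysis:

```python
# Fit M(t) = M_inf * (1 - exp(-k t)) to cumulative-release data and report t50%.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0.25, 0.5, 1, 2, 4, 8, 24])          # hours
released = np.array([12, 22, 38, 58, 76, 88, 95])  # % of dose (hypothetical values)

def first_order(t, m_inf, k):
    return m_inf * (1 - np.exp(-k * t))

(m_inf, k), _ = curve_fit(first_order, t, released, p0=(100, 0.3))
t50 = np.log(2) / k   # time to release 50% of the releasable fraction M_inf
print(f"M_inf = {m_inf:.1f}%, k = {k:.2f}/h, t50% = {t50:.1f} h")
```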

Relevance: 100.00%

Abstract:

In recent years, numerous studies have highlighted the toxic effects of organic micropollutants on the species of our lakes and rivers. However, most of these studies focused on the toxicity of individual substances, whereas organisms are exposed every day to thousands of substances in mixture, and the effects of these cocktails are not negligible. This doctoral thesis therefore examined models for predicting the environmental risk of such cocktails for the aquatic environment. The main objective was to assess the ecological risk of the mixtures of chemical substances measured in Lake Geneva, but also to take a critical look at the methodologies used and to propose adaptations for a better estimation of the risk. In the first part of this work, the risk of mixtures of pesticides and pharmaceuticals for the Rhône and for Lake Geneva was established using approaches envisioned in particular in European legislation. These are screening approaches, that is, approaches allowing a general assessment of mixture risk. Such an approach makes it possible to highlight the most problematic substances, i.e., those contributing the most to the toxicity of the mixture; in our case, essentially four pesticides. The study also shows that all substances, even in minute traces, contribute to the effect of the mixture. This finding has implications for environmental management: it implies that all sources of pollutants must be reduced, not only the most problematic ones. However, the proposed approach also has an important conceptual bias, which makes its use questionable beyond screening and would require an adaptation of the safety factors employed. In a second part, the study focused on the use of mixture models in environmental risk calculation. Mixture models have been developed and validated species by species, not for an assessment of the ecosystem as a whole. Their use should therefore proceed through species-by-species calculations, which is rarely done owing to the lack of available ecotoxicological data. The goal was thus to compare, using randomly generated values, the risk calculated with a rigorous species-by-species method against the one obtained in the classical way, where the models are applied to the whole community without accounting for inter-species variation. In the majority of cases the results are similar, which validates the traditionally used approach. On the other hand, this work identified certain cases where the classical application can lead to an under- or over-estimation of the risk. Finally, the last part of this thesis examined the influence that cocktails of micropollutants may have had on communities in situ. To this end, a two-step approach was adopted. First, the toxicity of fourteen herbicides detected in Lake Geneva was determined. Over the period studied, from 2004 to 2009, this herbicide toxicity decreased, from 4% of species affected to less than 1%. The next question was whether this decrease in toxicity had an impact on the development of certain species within the algal community. To this end, statistical analysis made it possible to isolate other factors that may influence the flora, such as water temperature or the presence of phosphates, and thereby to establish which species turned out to have been influenced, positively or negatively, by the decrease of toxicity in the lake over time. Interestingly, some of them had already shown similar behaviour in mesocosm studies. In conclusion, this work shows that robust models exist to predict the risk of micropollutant mixtures for aquatic species, and that they can be used to explain the role of these substances in the functioning of ecosystems. These models nevertheless have limits and underlying assumptions that must be kept in mind when they are applied.

For several years now, the risks posed by organic micropollutants to the aquatic environment have been of great concern to scientists as well as to our society. Numerous studies have highlighted the toxic effects these chemical substances can have on the species of our lakes and rivers when exposed to acute or chronic concentrations. However, most of these studies focused on the toxicity of individual substances, i.e., substances considered separately. The same currently holds for the European regulatory procedures concerning the environmental risk assessment of a substance. Yet organisms are exposed every day to thousands of substances in mixture, and the effects of these "cocktails" are not negligible. The ecological risk assessment of such mixtures of substances must therefore be addressed in the most appropriate and reliable way possible. In the first part of this thesis, we examined the methods currently envisioned for integration into European legislation for the assessment of mixture risk for the aquatic environment. These methods are based on the concentration addition model, using either the predicted no-effect concentrations of the substances in the environment (PNEC) or the effect concentrations (EC50) for certain species of one trophic level, combined with safety factors. We applied these methods to two specific cases, Lake Geneva and the Rhône in Switzerland, and discussed the outcomes of these applications. These first-tier assessments showed that the mixture risk for these case studies rapidly exceeds a critical threshold. This exceedance is generally due to two or three main substances. The proposed procedures therefore make it possible to identify the most problematic substances, for which management measures, such as a reduction of their input into the aquatic environment, should be envisioned. However, we also found that the risk level associated with these mixtures is not negligible even when these main substances are left aside: the accumulation of substances, even in minute traces, reaches the critical threshold, which is more challenging in terms of risk management. Furthermore, we highlighted a lack of reliability in these procedures, which can lead to contradictory results in terms of risk. This is linked to the inconsistency of the safety factors used in the different methods. In the second part of the thesis, we studied the reliability of more advanced methods for predicting the effect of mixtures on the communities living in the aquatic system. These methods rely on the concentration addition (CA) or response addition (RA) models applied to species sensitivity distribution (SSD) curves. Indeed, mixture models were developed and validated to be applied species by species, not to several species aggregated simultaneously into SSD curves. We therefore proposed a more rigorous procedure for assessing the risk of a mixture, which would be to apply the CA or RA models first to each species separately and, in a second step, to combine the results in order to build an SSD curve for the mixture. Unfortunately, this method is not applicable in most cases, because it requires too much data that is generally unavailable. Consequently, we compared, using randomly generated values, the risk calculated with this more rigorous method against the one obtained traditionally, in order to characterize the robustness of the approach consisting in applying the mixture models to the SSD curves. Our results showed that applying CA directly to SSDs can lead to an underestimation of the mixture concentration affecting 5% or 50% of species, in particular when the substances show a large standard deviation in their species sensitivity distribution. The application of the RA model can, for its part, lead to over- or underestimations, mainly depending on the slope of the dose-response curves of each species composing the SSDs. The underestimation with RA becomes potentially important when the ratio between the EC50 and the EC10 of the species' dose-response curve is smaller than 100. However, for most substances, realistic ecotoxicity data are such that the mixture risk calculated by applying the models directly to the SSDs remains consistent and would rather slightly overestimate the risk. These results thus validate the traditionally used approach. Nevertheless, this source of error must be kept in mind when assessing the risk of a mixture with this traditional method, in particular when the SSDs show a data distribution outside the limits determined in this study. Finally, in the last part of this thesis, we confronted mixture effect predictions with biological changes observed in the environment. In this study we used data from the long-term monitoring of a large European lake, Lake Geneva, which offered the opportunity to assess to what extent the predicted toxicity of herbicide mixtures explained the changes in the composition of the phytoplankton community, alongside other classical limnological parameters such as nutrients. To reach this goal, we determined the mixture toxicity, over several years, of 14 herbicides regularly detected in the lake, using the CA and RA models with species sensitivity distribution curves. A decreasing temporal gradient of toxicity could be observed from 2004 to 2009. Redundancy analysis and partial redundancy analysis showed that this gradient explains a significant part of the variation in the composition of the phytoplankton community, even after removing the effect of all other co-variables. Moreover, some species shown to have been influenced, positively or negatively, by the decrease of toxicity in the lake over time displayed similar behaviour in mesocosm studies. It can be concluded that the toxicity of the herbicide mixture is one of the key parameters explaining the phytoplankton changes in Lake Geneva. In conclusion, various methods exist to predict the risk of micropollutant mixtures for aquatic species, and this risk can play a role in the functioning of ecosystems. These models nonetheless have limits and underlying assumptions that must be considered when they are applied, before using their results for environmental risk management.

For several years now, scientists as well as society at large have been concerned by the risk organic micropollutants may pose to the aquatic environment. Indeed, numerous studies have shown the toxic effects these substances may induce in organisms living in our lakes or rivers, especially when they are exposed to acute or chronic concentrations. However, most of these studies focused on the toxicity of single compounds, i.e., compounds considered individually. The same holds for the current European regulatory procedures for the environmental risk assessment of these substances. Yet aquatic organisms are typically exposed every day and simultaneously to thousands of organic compounds, and the toxic effects resulting from these "cocktails" cannot be neglected. The ecological risk assessment of mixtures of such compounds therefore has to be addressed by scientists in the most reliable and appropriate way. In the first part of this thesis, the procedures currently envisioned for aquatic mixture risk assessment in European legislation are described. These methodologies are based on the concentration addition mixture model and the use of predicted no-effect concentrations (PNEC) or effect concentrations (EC50) together with assessment factors. These principal approaches were applied to two specific case studies, Lake Geneva and the River Rhône in Switzerland, including a discussion of the outcomes of such applications. These first-tier assessments showed that the mixture risks for the studied cases rapidly exceeded the critical value. This exceedance is generally due to two or three main substances. The proposed procedures therefore allow the identification of the most problematic substances, for which management measures, such as a reduction of their input into the aquatic environment, should be envisioned. However, it was also shown that the risk levels associated with mixtures of compounds are not negligible even without considering these main substances: it is the sum of the substances that is problematic, which is more challenging in terms of risk management. Moreover, a lack of reliability in the procedures was highlighted, which can lead to contradictory results in terms of risk. This result is linked to the inconsistency of the assessment factors applied in the different methods. In the second part of the thesis, the reliability of more advanced procedures to predict mixture effects on communities in the aquatic system was investigated.
These established methodologies combine the concentration addition (CA) or response addition (RA) model with species sensitivity distribution (SSD) curves. However, mixture effect predictions were shown to be consistent only when the mixture models are applied to a single species, and not to several species aggregated simultaneously into SSDs. Hence, a more stringent procedure for mixture risk assessment is proposed: to apply the CA or RA models first to each species separately and, in a second step, to combine the results to build an SSD for the mixture. Unfortunately, this methodology is not applicable in most cases, because it requires large data sets that are usually not available. Therefore, the differences between the two methodologies were studied with artificially created datasets in order to characterize the robustness of the traditional approach of applying the models to species sensitivity distributions. The results showed that the use of CA directly on SSDs might lead to underestimations of the mixture concentration affecting 5% or 50% of species, especially when the substances present a large standard deviation in their species sensitivity distribution. The application of RA can lead to over- or underestimates, depending mainly on the slope of the dose-response curves of the individual species. The potential underestimation with RA becomes important when the ratio between the EC50 and the EC10 of the dose-response curves of the species composing the SSD is smaller than 100. However, considering common real cases of ecotoxicity data for substances, the mixture risk calculated by applying the mixture models directly to SSDs remains consistent and would rather slightly overestimate the risk. These results can be used as a theoretical validation of the currently applied methodology. Nevertheless, when assessing the risk of mixtures with this classical methodology, one has to keep in mind this source of error, especially when the SSDs present a data distribution outside the range determined in this study. Finally, in the last part of this thesis, we confronted the mixture effect predictions with biological changes observed in the environment. In this study, the long-term monitoring of a great European lake, Lake Geneva, provided the opportunity to assess to what extent the predicted toxicity of herbicide mixtures explains the changes in the composition of the phytoplankton community, alongside other classical limnological parameters such as nutrients. To reach this goal, the gradient of the mixture toxicity of 14 herbicides regularly detected in the lake was calculated, using the concentration addition and response addition models. A decreasing temporal gradient of toxicity was observed from 2004 to 2009. Redundancy analysis and partial redundancy analysis showed that this gradient explains a significant portion of the variation in phytoplankton community composition, even after removing the effect of all other co-variables. Moreover, some species that were shown to be influenced, positively or negatively, by the decrease of toxicity in the lake over time showed similar behaviors in mesocosm studies. It can be concluded that the herbicide mixture toxicity is one of the key parameters explaining phytoplankton changes in Lake Geneva. To conclude, different methods exist to predict the risk of mixtures in ecosystems, but their reliability varies depending on the underlying hypotheses. One should therefore carefully consider these hypotheses, as well as the limits of the approaches, before using the results for environmental risk management.
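As a minimal numerical illustration of the two mixture models named in this abstract, applied at the level of a single species (concentration addition via toxic units, and response addition combining fractional effects); the EC50s, Hill slope and measured concentrations are invented:

```python
# Concentration addition (CA) vs. response addition (RA) for one species.
import numpy as np

ec50 = np.array([3.0, 10.0, 50.0])   # ug/L, per substance, for one species (assumed)
conc = np.array([0.6, 1.0, 5.0])     # measured environmental concentrations, ug/L (assumed)
hill = 1.5                           # common Hill slope (assumption)

# CA: sum of toxic units; a sum >= 1 means the CA-predicted EC50 of the mixture
# is exceeded for this species.
toxic_units = np.sum(conc / ec50)

# RA (independent action): combine the fractional effects E_i of each substance alone.
effects = conc**hill / (conc**hill + ec50**hill)   # Hill dose-response per substance
e_mix_ra = 1 - np.prod(1 - effects)

print(f"CA toxic-unit sum = {toxic_units:.2f}")
print(f"RA predicted mixture effect = {e_mix_ra:.2%}")
```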

Relevance: 100.00%

Abstract:

Résumé: Owing to recent technological advances, digital image archives have grown, both qualitatively and quantitatively, at an unprecedented rate. Despite the enormous possibilities they offer, these advances raise new questions about the processing of the masses of data acquired. This question is at the root of this Thesis: problems of processing digital information at very high spatial and/or spectral resolution are considered here using statistical learning approaches, namely kernel methods. This Thesis studies image classification problems, i.e., the categorization of pixels into a reduced number of classes reflecting the spectral and contextual properties of the objects they represent. The emphasis is on the efficiency of the algorithms, as well as on their simplicity, so as to increase their potential for implementation by users. Moreover, the challenge of this Thesis is to remain close to the concrete problems of satellite-image users without losing sight of the interest of the proposed methods for the machine learning community from which they originate. In this sense, this work plays the transdisciplinarity card by maintaining a strong link between the two sciences in all the developments proposed. Four models are proposed: the first addresses the problem of the high dimensionality and redundancy of the data with a model that optimizes classification performance by adapting to the particularities of the image. This is made possible by a ranking system for the variables (the bands) that is optimized together with the base model: in doing so, only the variables that matter for solving the problem are used by the classifier. The lack of labeled information, and the uncertainty about its relevance to the problem, are at the root of the next two models, based respectively on active learning and on semi-supervised methods: the former improves the quality of a training set by direct interaction between the user and the machine, while the latter uses unlabeled pixels to improve the description of the available data and the robustness of the model. Finally, the last model proposed considers the more theoretical question of the structure among the outputs: the integration of this source of information, never before considered in remote sensing, opens new research challenges. Advanced kernel methods for remote sensing image classification. Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009. Abstract: The technical developments in recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are available to users. However, even if these advances open more and more possibilities in the use of digital imagery, they also raise several problems of storage and processing. The latter is considered in this Thesis: the processing of very high spatial and spectral resolution images is treated with approaches based on data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of the image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented.
The accent is put on algorithmic efficiency and on the simplicity of the approaches proposed, to avoid overly complex models that would not be used in practice. The major challenge of the Thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the models proposed have been developed keeping in mind the need for such a synergy. Four models are proposed: first, an adaptive model that learns the relevant image features is proposed to solve the problem of the high dimensionality and collinearity of the image features. This model automatically provides an accurate classifier and a ranking of the relevance of the single features. The scarcity and unreliability of labeled information were the common root of the second and third models proposed: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and quality of the description of the data. Both solutions have been explored, resulting in two methodological contributions based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs has been considered in the last model, which, by integrating output similarity into the model, opens new challenges and opportunities for remote sensing image processing.
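A minimal sketch of the core task addressed by the thesis, supervised classification of pixel feature vectors with an RBF-kernel support vector machine in scikit-learn; the data are random stand-ins for image pixels, and the adaptive, active-learning and semi-supervised extensions described above are not reproduced:

```python
# Classify pixel feature vectors (e.g., spectral bands) with an RBF-kernel SVM.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_pixels, n_bands, n_classes = 3000, 8, 4
X = rng.normal(size=(n_pixels, n_bands))                         # synthetic spectral features
centers = rng.normal(scale=2.0, size=(n_classes, n_bands))
y = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)  # synthetic land-cover labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("pixel classification accuracy:", round(clf.score(X_te, y_te), 3))
```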

Relevance: 100.00%

Abstract:

BACKGROUND: This study was undertaken to determine whether use of the direct renin inhibitor aliskiren would reduce cardiovascular and renal events in patients with type 2 diabetes and chronic kidney disease, cardiovascular disease, or both. METHODS: In a double-blind fashion, we randomly assigned 8561 patients to aliskiren (300 mg daily) or placebo as an adjunct to an angiotensin-converting-enzyme inhibitor or an angiotensin-receptor blocker. The primary end point was a composite of the time to cardiovascular death or a first occurrence of cardiac arrest with resuscitation; nonfatal myocardial infarction; nonfatal stroke; unplanned hospitalization for heart failure; end-stage renal disease, death attributable to kidney failure, or the need for renal-replacement therapy with no dialysis or transplantation available or initiated; or doubling of the baseline serum creatinine level. RESULTS: The trial was stopped prematurely after the second interim efficacy analysis. After a median follow-up of 32.9 months, the primary end point had occurred in 783 patients (18.3%) assigned to aliskiren as compared with 732 (17.1%) assigned to placebo (hazard ratio, 1.08; 95% confidence interval [CI], 0.98 to 1.20; P=0.12). Effects on secondary renal end points were similar. Systolic and diastolic blood pressures were lower with aliskiren (between-group differences, 1.3 and 0.6 mm Hg, respectively) and the mean reduction in the urinary albumin-to-creatinine ratio was greater (between-group difference, 14 percentage points; 95% CI, 11 to 17). The proportion of patients with hyperkalemia (serum potassium level, ≥6 mmol per liter) was significantly higher in the aliskiren group than in the placebo group (11.2% vs. 7.2%), as was the proportion with reported hypotension (12.1% vs. 8.3%) (P<0.001 for both comparisons). CONCLUSIONS: The addition of aliskiren to standard therapy with renin-angiotensin system blockade in patients with type 2 diabetes who are at high risk for cardiovascular and renal events is not supported by these data and may even be harmful. (Funded by Novartis; ALTITUDE ClinicalTrials.gov number, NCT00549757.).