66 results for Future value prediction

at Université de Lausanne, Switzerland


Relevance: 80.00%

Abstract:

Radioactive soil-contamination mapping and risk assessment is a vital issue for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction realization accompanied (in some cases) by estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
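To make the abstract's central contrast concrete - a regression model returns one value per location, while stochastic simulation returns many equally plausible realizations from which exceedance probabilities can be computed - here is a minimal sketch in Python; the transect, covariance parameters, and contamination threshold are all hypothetical, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D transect of 50 locations with an exponential covariance
# model (assumed sill and range, standing in for a fitted variogram).
x = np.linspace(0.0, 10.0, 50)
sill, corr_range = 1.0, 2.0
cov = sill * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_range)

# A regression-style approach would stop at this single-value prediction:
mean = np.full(x.size, 5.0)

# Stochastic simulation instead draws many realizations of the field...
realizations = rng.multivariate_normal(mean, cov, size=200)

# ...which makes probabilistic mapping possible, e.g. the per-location
# probability that contamination exceeds a decision threshold:
p_exceed = (realizations > 6.0).mean(axis=0)
print(p_exceed.shape)  # (50,)
```

Each realization honours the assumed spatial correlation, so summaries such as `p_exceed` quantify risk in a way a single regression surface cannot.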

Relevance: 80.00%

Abstract:

Summary: This dissertation explores how stakeholder dialogue influences corporate processes, and speculates about the potential of this phenomenon - particularly with actors, like non-governmental organizations (NGOs) and other representatives of civil society, which have received growing attention against a backdrop of increasing globalisation and which have often been cast in an adversarial light by firms - as a source of learning and a spark for innovation in the firm. The study is set within the context of the introduction of genetically-modified organisms (GMOs) in Europe. Its significance lies in the fact that scientific developments and new technologies are being generated at an unprecedented rate in an era where civil society is becoming more informed, more reflexive, and more active in facilitating or blocking such new developments, which could have the potential to trigger widespread changes in economies, attitudes, and lifestyles, and address global problems like poverty, hunger, climate change, and environmental degradation. In the 1990s, companies using biotechnology to develop and offer novel products began to experience increasing pressure from civil society to disclose information about the risks associated with the use of biotechnology and GMOs, in particular. Although no harmful effects for humans or the environment have been factually demonstrated even to date (2008), this technology remains highly contested and its introduction in Europe catalysed major companies to invest significant financial and human resources in stakeholder dialogue. A relatively new phenomenon at the time, with little theoretical backing, dialogue was seen to reflect a move towards greater engagement with stakeholders, commonly defined as those "individuals or groups with which business interacts who have a 'stake', or vested interest in the firm" (Carroll, 1993:22) with whom firms are seen to be inextricably embedded (Andriof & Waddock, 2002).
Regarding the organisation of this dissertation, Chapter 1 (Introduction) describes the context of the study and elaborates its significance for academics and business practitioners as an empirical work embedded in a sector at the heart of the debate on corporate social responsibility (CSR). Chapter 2 (Literature Review) traces the roots and evolution of CSR, drawing on Stakeholder Theory, Institutional Theory, Resource Dependence Theory, and Organisational Learning to establish what has already been developed in the literature regarding the stakeholder concept, motivations for engagement with stakeholders, the corporate response to external constituencies, and outcomes for the firm in terms of organisational learning and change. I used this review of the literature to guide my inquiry and to develop the key constructs through which I viewed the empirical data that was gathered. In this respect, concepts related to how the firm views itself (as a victim, follower, leader), how stakeholders are viewed (as a source of pressure and/or threat; as an asset: current and future), corporate responses (in the form of buffering, bridging, boundary redefinition), and types of organisational learning (single-loop, double-loop, triple-loop) and change (first order, second order, third order) were particularly important in building the key constructs of the conceptual model that emerged from the analysis of the data. Chapter 3 (Methodology) describes the methodology that was used to conduct the study, affirms the appropriateness of the case study method in addressing the research question, and describes the procedures for collecting and analysing the data. Data collection took place in two phases - extending from August 1999 to October 2000, and from May to December 2001 - which functioned as 'snapshots' in time of the three companies under study.
The data was systematically analysed and coded using ATLAS/ti, a qualitative data analysis tool, which enabled me to sort, organise, and reduce the data into a manageable form. Chapter 4 (Data Analysis) contains the three cases that were developed (anonymised as Pioneer, Helvetica, and Viking). Each case is presented in its entirety (constituting a 'within-case' analysis), followed by a 'cross-case' analysis, backed up by extensive verbatim evidence. Chapter 5 presents the research findings, outlines the study's limitations, describes managerial implications, and offers suggestions for where more research could elaborate the conceptual model developed through this study, as well as suggestions for additional research in areas where managerial implications were outlined. References and Appendices are included at the end. This dissertation results in the construction and description of a conceptual model, grounded in the empirical data and tied to existing literature, which portrays a set of elements and relationships deemed important for understanding the impact of stakeholder engagement for firms in terms of organisational learning and change. This model suggests that corporate perceptions about the nature of stakeholders influence the perceived value of stakeholder contributions. When stakeholders are primarily viewed as a source of pressure or threat, firms tend to adopt a reactive/defensive posture in an effort to manage stakeholders and protect the firm from sources of outside pressure - behaviour consistent with Resource Dependence Theory, which suggests that firms try to gain control over external threats by focussing on the relevant stakeholders on whom they depend for critical resources, and try to reverse the control potentially exerted by external constituencies by trying to influence and manipulate these valuable stakeholders.
In situations where stakeholders are viewed as a current strategic asset, firms tend to adopt a proactive/offensive posture in an effort to tap stakeholder contributions and connect the organisation to its environment - behaviour consistent with Institutional Theory, which suggests that firms try to ensure the continuing license to operate by internalising external expectations. In instances where stakeholders are viewed as a source of future value, firms tend to adopt an interactive/innovative posture in an effort to reduce or widen the embedded system and bring stakeholders into systems of innovation and feedback - behaviour consistent with the literature on Organisational Learning, which suggests that firms can learn how to optimize their performance as they develop systems and structures that are more adaptable and responsive to change. The conceptual model moreover suggests that the perceived value of stakeholder contributions drives corporate aims for engagement, which can be usefully categorised as dialogue intentions spanning a continuum running from low-level to high-level to very-high-level. This study suggests that activities aimed at disarming critical stakeholders ('manipulation'), providing guidance and correcting misinformation ('education'), being transparent about corporate activities and policies ('information'), alleviating stakeholder concerns ('placation'), and accessing stakeholder opinion ('consultation') represent low-level dialogue intentions and are experienced by stakeholders as asymmetrical, persuasive, compliance-gaining activities that are not in line with 'true' dialogue. This study also finds evidence that activities aimed at redistributing power ('partnership'), involving stakeholders in internal corporate processes ('participation'), and demonstrating corporate responsibility ('stewardship') reflect high-level dialogue intentions.
This study additionally finds evidence that building and sustaining high-quality, trusted relationships which can meaningfully influence organisational policies inclines a firm towards the type of interactive, proactive processes that underpin the development of sustainable corporate strategies. Dialogue intentions are related to the type of corporate response: low-level intentions can lead to buffering strategies; high-level intentions can underpin bridging strategies; very-high-level intentions can incline a firm towards boundary redefinition. The nature of the corporate response (which encapsulates a firm's posture towards stakeholders, demonstrated by the level of dialogue intention and the firm's strategy for dealing with stakeholders) favours the type of learning and change experienced by the organisation. This study indicates that buffering strategies, where the firm attempts to protect itself against external influences and carry out its existing strategy, typically lead to single-loop learning, whereby the firm learns how to perform better within its existing paradigm and, at most, improves the performance of the established system - an outcome associated with first-order change. Bridging responses, where the firm adapts organisational activities to meet external expectations, typically lead a firm to acquire new behavioural capacities characteristic of double-loop learning, whereby insights and understanding are uncovered that are fundamentally different from existing knowledge and where stakeholders are brought into problem-solving conversations that enable them to influence corporate decision-making to address shortcomings in the system - an outcome associated with second-order change.
Boundary redefinition suggests that the firm engages in triple-loop learning, where the firm changes relations with stakeholders in profound ways, considers problems from a whole-system perspective, examining the deep structures that sustain the system, producing innovation to address chronic problems and develop new opportunities - an outcome associated with third-order change. This study supports earlier theoretical and empirical studies (e.g. Weick's (1979, 1985) work on self-enactment; Maitlis & Lawrence's (2007), Maitlis' (2005), and Weick et al.'s (2005) work on sensegiving and sensemaking in organisations; Brickson's (2005, 2007) and Scott & Lane's (2000) work on organisational identity orientation), which indicate that corporate self-perception is a key underlying factor driving the dynamics of organisational learning and change. Such theorizing has important implications for managerial practice; namely, that a company which perceives itself as a 'victim' may be highly inclined to view stakeholders as a source of negative influence, and would therefore be potentially unable to benefit from the positive influence of engagement. Such a self-perception can blind the firm to seeing stakeholders in a more positive, contributing light, which suggests that such firms may not be inclined to embrace external sources of innovation and learning, as they are focussed on protecting the firm against disturbing environmental influences (through buffering), and remain more likely to perform better within an existing paradigm (single-loop learning). By contrast, a company that perceives itself as a 'leader' may be highly inclined to view stakeholders as a source of positive influence.
On the downside, such a firm might have difficulty distinguishing when stakeholder contributions are less pertinent: it is deliberately more open to elements in its operating environment (including stakeholders) as potential sources of learning and change, as the firm is oriented towards creating space for fundamental change (through boundary redefinition), opening issues to entirely new ways of thinking and addressing issues from a whole-system perspective. A significant implication of this study is that potentially only those companies that see themselves as leaders are ultimately able to tap the innovation potential of stakeholder dialogue.

Relevance: 40.00%

Abstract:

Developments in the field of neuroscience have created a high level of interest in the subject of adolescent psychosis, particularly in relation to prediction and prevention. As the medical practice of adolescent psychosis and its treatment is characterised by a heterogeneity which is both symptomatic and evolutive, the somewhat poor prognosis of chronic development justifies the research performed: apparent indicators of schizophrenic disorders on the one hand and specific endophenotypes on the other are becoming increasingly important. The significant progress made on the human genome shows that the genetic predetermination of current psychiatric pathologies is complex and subject to moderating effects, and there is therefore significant potential for nature-nurture interactions (between the environment and the genes). The road to be followed in researching the phenotypic expression of a psychosis gene is long and winding and is susceptible to many external influences at various levels with different effects. Neurobiological, neurophysiological, neuropsychological and neuroanatomical studies help to identify endophenotypes, which allow researchers to create identifying "markers" along this winding road. The endophenotypes could make it possible to redefine the nosological categories and enhance understanding of the physiopathology of schizophrenia. In a predictive approach, large-scale retrospective and prospective studies make it possible to identify risk factors which are compatible with the neurodevelopmental hypothesis of schizophrenia. However, the predictive value of such markers or risk indicators is not yet sufficiently developed to offer a reliable early-detection method or possible schizophrenia prevention measures. Nonetheless, new developments show promise against the background of a possible future nosographic revolution, based on a paradigm shift.
It is perhaps on the basis of homogeneous endophenotypes in particular that we will be able to understand what protects against, or indeed can trigger, psychosis irrespective of the clinical expression, or to isolate the common genetic and biological bases according to homogeneous clinical characteristics - attempts which have, to date, proved unsuccessful.

Relevance: 40.00%

Abstract:

OBJECTIVE: The goal of our study was to compare Doppler sonography and renal scintigraphy as tools for predicting the therapeutic response in patients after undergoing renal angioplasty. SUBJECTS AND METHODS: Seventy-four hypertensive patients underwent clinical examination, Doppler sonography, and renal scintigraphy before and after receiving captopril in preparation for renal revascularization. The patients were evaluated for the status of hypertension 3 months after the procedure. The predictive values of the findings of clinical examination, Doppler sonography, renal scintigraphy, and angiography were assessed. RESULTS: For prediction of a favorable therapeutic outcome, abnormal results from renal scintigraphy before and after captopril administration had a sensitivity of 58% and specificity of 57%. Findings of Doppler sonography had a sensitivity of 68% and specificity of 50% before captopril administration and a sensitivity of 81% and specificity of 32% after captopril administration. Significant predictors of a cure or reduction of hypertension after revascularization were low unilateral (p = 0.014) and bilateral (p = 0.016) resistive indexes on Doppler sonography before (p = 0.009) and after (p = 0.028) captopril administration. On multivariate analysis, the best predictors were a unilateral resistive index of less than 0.65 (odds ratio [OR] = 3.7) after captopril administration and a kidney longer than 93 mm (OR = 7.8). The two best combined criteria to predict the favorable therapeutic outcome were a bilateral resistive index of less than 0.75 before captopril administration combined with a unilateral resistive index of less than 0.70 after captopril administration (sensitivity, 76%; specificity, 58%) or a bilateral resistive index of less than 0.75 before captopril administration and a kidney measuring longer than 90 mm (sensitivity, 81%; specificity, 50%).
CONCLUSION: Measurements of kidney length and unilateral and bilateral resistive indexes before and after captopril administration were useful in predicting the outcome after renal angioplasty. Renal scintigraphy had no significant predictive value.

Relevance: 40.00%

Abstract:

Although both inflammatory and atherosclerosis markers have been associated with coronary heart disease (CHD) risk, data directly comparing their predictive value are limited. The authors compared the value of 2 atherosclerosis markers (ankle-arm index (AAI) and aortic pulse wave velocity (aPWV)) and 3 inflammatory markers (C-reactive protein (CRP), interleukin-6 (IL-6), and tumor necrosis factor-alpha (TNF-alpha)) in predicting CHD events. Among 2,191 adults aged 70-79 years at baseline (1997-1998) from the Health, Aging, and Body Composition Study cohort, the authors examined adjudicated incident myocardial infarction or CHD death ("hard" events) and "hard" events plus hospitalization for angina or coronary revascularization (total CHD events). During 8 years of follow-up between 1997-1998 and June 2007, 351 participants developed total CHD events (197 "hard" events). IL-6 (highest quartile vs. lowest: hazard ratio = 1.82, 95% confidence interval: 1.33, 2.49; P-trend < 0.001) and AAI (AAI ≤ 0.9 vs. AAI 1.01-1.30: hazard ratio = 1.57, 95% confidence interval: 1.14, 2.18) predicted CHD events above traditional risk factors and modestly improved global measures of predictive accuracy. CRP, TNF-alpha, and aPWV had weaker associations. IL-6 and AAI accurately reclassified 6.6% and 3.3% of participants, respectively (both P ≤ 0.05). Results were similar for "hard" CHD, with higher reclassification rates for AAI. IL-6 and AAI are associated with future CHD events beyond traditional risk factors and modestly improve risk prediction in older adults.
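As a pointer to how reclassification figures like those above are computed, here is a sketch of the categorical net reclassification improvement (NRI); the counts are hypothetical and not taken from the Health ABC data:

```python
# Participants who moved to a higher or lower risk category once a new
# marker (e.g. IL-6 or AAI) was added to a model with traditional risk
# factors, split by whether they went on to have a CHD event.
events_up, events_down, n_events = 30, 10, 197              # hypothetical
nonevents_up, nonevents_down, n_nonevents = 80, 120, 1994   # hypothetical

# NRI rewards upward moves among events and downward moves among non-events.
nri = (events_up - events_down) / n_events \
    + (nonevents_down - nonevents_up) / n_nonevents
print(round(nri, 3))  # 0.122
```

A positive NRI means the new marker, on balance, shifted people into the risk categories matching their eventual outcomes.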

Relevance: 40.00%

Abstract:

OBJECTIVES: Therapeutic hypothermia and pharmacological sedation may influence outcome prediction after cardiac arrest. The use of a multimodal approach, including clinical examination, electroencephalography, somatosensory-evoked potentials, and serum neuron-specific enolase, is recommended; however, no study has examined the comparative performance of these predictors or addressed their optimal combination. DESIGN: Prospective cohort study. SETTING: Adult ICU of an academic hospital. PATIENTS: One hundred thirty-four consecutive adults treated with therapeutic hypothermia after cardiac arrest. MEASUREMENTS AND MAIN RESULTS: Variables related to the cardiac arrest (cardiac rhythm, time to return of spontaneous circulation), clinical examination (brainstem reflexes and myoclonus), electroencephalography reactivity during therapeutic hypothermia, somatosensory-evoked potentials, and serum neuron-specific enolase. Models to predict clinical outcome at 3 months (assessed using the Cerebral Performance Categories: 5 = death; 3-5 = poor recovery) were evaluated using ordinal logistic regressions and receiver operating characteristic curves. Seventy-two patients (54%) had a poor outcome (of whom 62 died), and 62 had a good outcome. Multivariable ordinal logistic regression identified absence of electroencephalography reactivity (p < 0.001), incomplete recovery of brainstem reflexes in normothermia (p = 0.013), and neuron-specific enolase higher than 33 μg/L (p = 0.029), but not somatosensory-evoked potentials, as independent predictors of poor outcome. The combination of clinical examination, electroencephalography reactivity, and neuron-specific enolase yielded the best predictive performance (receiver operating characteristic areas: 0.89 for mortality and 0.88 for poor outcome), with 100% positive predictive value. Addition of somatosensory-evoked potentials to this model did not improve prognostic accuracy.
CONCLUSIONS: Combination of clinical examination, electroencephalography reactivity, and serum neuron-specific enolase offers the best outcome predictive performance for prognostication of early postanoxic coma, whereas somatosensory-evoked potentials do not add any complementary information. Although prognostication of poor outcome seems excellent, future studies are needed to further improve prediction of good prognosis, which still remains inaccurate.

Relevance: 40.00%

Abstract:

A prospective study was undertaken to determine prognostic markers for patients with obstructive jaundice. Along with routine liver function tests, antipyrine clearance was determined in 20 patients. Four patients died after basal investigations. Five patients underwent definitive surgery. The remaining 11 patients were subjected to percutaneous transhepatic biliary decompression. Four patients died during the drainage period, while surgery was carried out for seven patients within 1-3 weeks of drainage. Of 20 patients, only six patients survived. Basal liver function tests were comparable in survivors and nonsurvivors. Discriminant analysis of the basal data revealed that plasma bilirubin, proteins and antipyrine half-life taken together had a strong association with mortality. A mathematical equation was derived using these variables and a score was computed for each patient. It was observed that a score value greater than or equal to 0.84 indicated survival. Omission of antipyrine half-life from the data, however, resulted in prediction of false security in 55% of patients. This study highlights the importance of addition of antipyrine elimination test to the routine liver function tests for precise identification of high risk patients.

Relevance: 40.00%

Abstract:

Background: Screening of elevated blood pressure (BP) in children has been advocated to identify hypertension early. However, identification of children with sustained elevated BP is challenging due to the high variability of BP. The value of an elevated BP measure during childhood and adolescence for the prediction of future elevated BP is not well described. Objectives: We assessed the positive (PPV) and negative (NPV) predictive value of high BP for sustained elevated BP in cohorts of children of the Seychelles, a rapidly developing island state in the African region. Methods: Serial school-based surveys of weight, height, and BP were conducted yearly between 1998 and 2006 among all students of the country in four school grades (kindergarten [G0, mean age (SD): 5.5 (0.4) yr], G4 [9.2 (0.4) yr], G7 [12.5 (0.4) yr] and G10 [15.6 (0.5) yr]). We constituted three cohorts of children examined twice at a 3-4 year interval: 4,557 children examined at G0 and G4, 6,198 at G4 and G7, and 6,094 at G7 and G10. The same automated BP measurement devices were used throughout the study. BP was measured twice at each exam and averaged. Obesity and elevated BP were defined using the CDC criteria (BMI ≥ 95th sex- and age-specific percentile) and the NHBPEP criteria (BP ≥ 95th sex-, age-, and height-specific percentile), respectively. Results: Prevalence of obesity was 6.1% at G0, 7.1% at G4, 7.5% at G7, and 6.5% at G10. Prevalence of elevated BP was 10.2% at G0, 9.9% at G4, 7.1% at G7, and 8.7% at G10. Among children with elevated BP at the initial exam, the PPV for keeping elevated BP was low but increased with age: 13% between G0 and G4, 19% between G4 and G7, and 27% between G7 and G10. Among obese children with elevated BP, the PPV was higher: 33%, 35% and 39%, respectively. Overall, the probability for children with normal BP to remain in that category 3-4 years later (NPV) was 92%, 95%, and 93%, respectively.
By comparison, the PPV for children initially obese to remain obese was much higher at 71%, 71%, and 62% (G7-G10), respectively. The NPV (i.e. the probability of remaining at normal weight) was 94%, 96%, and 98%, respectively. Conclusion: During childhood and adolescence, having an elevated BP at one occasion is a weak predictor of sustained elevated BP 3-4 years later. In obese children, it is a better predictor.
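The PPV/NPV logic of this study reduces to a 2x2 follow-up table. A minimal sketch (the counts are hypothetical, chosen only so the results land near the reported G0-G4 figures):

```python
# Rows: BP status at the first survey; columns: status 3-4 years later.
elevated_then_elevated = 60     # hypothetical counts
elevated_then_normal = 400
normal_then_elevated = 300
normal_then_normal = 3800

# PPV: of the children with elevated BP initially, the fraction still
# elevated at follow-up; NPV: of those initially normal, still normal.
ppv = elevated_then_elevated / (elevated_then_elevated + elevated_then_normal)
npv = normal_then_normal / (normal_then_normal + normal_then_elevated)
print(round(ppv, 2), round(npv, 2))  # 0.13 0.93
```

The asymmetry between the low PPV and the high NPV is exactly what makes a single elevated reading a weak predictor of sustained elevated BP.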

Relevance: 40.00%

Abstract:

Cardiovascular risk assessment might be improved with the addition of emerging new tests derived from atherosclerosis imaging, laboratory tests or functional tests. This article reviews relative risk, odds ratios, receiver operating characteristic curves, posttest risk calculations based on likelihood ratios, the net reclassification improvement and integrated discrimination. These tools serve to determine whether a new test has added clinical value on top of conventional risk testing and how this can be verified statistically. Two clinically meaningful examples serve to illustrate the novel approaches. This work serves as a review and groundwork for the development of new guidelines on cardiovascular risk prediction, taking emerging tests into account, to be proposed in the future by members of the 'Taskforce on Vascular Risk Prediction' under the auspices of the Working Group 'Swiss Atherosclerosis' of the Swiss Society of Cardiology.
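One of the tools this article reviews, the posttest risk calculation based on likelihood ratios, can be sketched in a few lines; the pretest risk and likelihood-ratio values below are illustrative, not taken from the article:

```python
def post_test_risk(pre_test_risk: float, likelihood_ratio: float) -> float:
    """Bayes' theorem in odds form: risk -> odds, multiply by LR, back to risk."""
    pre_odds = pre_test_risk / (1.0 - pre_test_risk)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# A hypothetical patient with a 10% pretest risk and a positive test
# carrying a likelihood ratio of 3:
print(round(post_test_risk(0.10, 3.0), 2))  # 0.25
```

A test with a likelihood ratio of 1 leaves the risk unchanged, which is the formal sense in which such a test adds no clinical value.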

Relevance: 40.00%

Abstract:

This paper addresses primary care physicians, cardiologists, internists, angiologists and doctors desirous of improving vascular risk prediction in primary care. Many cardiovascular risk factors act aggressively on the arterial wall and result in atherosclerosis and atherothrombosis. Cardiovascular prognosis derived from ultrasound imaging is, however, excellent in subjects without formation of intimal thickening or atheromas. Since ultrasound visualises the arterial wall directly, the information derived from the arterial wall may add independent incremental information to the knowledge of risk derived from global risk assessment. This paper provides an overview on plaque imaging for vascular risk prediction in two parts. Part 1: Carotid intima-media thickness (IMT) is frequently used as a surrogate marker for outcome in intervention studies addressing rather large cohorts of subjects. Carotid IMT as a risk prediction tool for the prevention of acute myocardial infarction and stroke has been extensively studied in many patients since 1987, and has yielded incremental hazard ratios for these cardiovascular events independently of established cardiovascular risk factors. However, carotid IMT measurements are not used uniformly and therefore still lack widely accepted standardisation. Hence, at an individual, practice-based level, carotid IMT is not recommended as a risk assessment tool. The total plaque area of the carotid arteries (TPA) is a measure of the global plaque burden within both carotid arteries. It was recently shown in a large Norwegian cohort involving over 6000 subjects that TPA is a very good predictor of future myocardial infarction in women, with an area under the receiver operating characteristic (ROC) curve (AUC) of 0.73 (in men: 0.63). Further, the AUC for risk prediction is high both for vascular death in a vascular prevention clinic group (AUC 0.77) and for fatal or nonfatal myocardial infarction in a true primary care group (AUC 0.79).
Since TPA has acceptable reproducibility, allows calculation of posttest risk and is easily obtained at low cost, this risk assessment tool may come in for more widespread use in the future and also serve as a tool for atherosclerosis tracking and guidance for intensity of preventive therapy. However, more studies with TPA are needed. Part 2: Carotid and femoral plaque formation as detected by ultrasound offers a global view of the extent of atherosclerosis. Several prospective cohort studies have shown that cardiovascular risk prediction is greater for plaques than for carotid IMT. The number of arterial beds affected by significant atheromas may simply be added numerically to derive additional information on the risk of vascular events. A new atherosclerosis burden score (ABS) simply calculates the sum of carotid and femoral plaques encountered during ultrasound scanning. ABS correlates well and independently with the presence of coronary atherosclerosis and stenosis as measured by invasive coronary angiogram. However, the prognostic power of ABS as an independent marker of risk still needs to be elucidated in prospective studies. In summary, the large number of ways to measure atherosclerosis and related changes in human arteries by ultrasound indicates that this technology is not yet sufficiently perfected and needs more standardisation and workup on clearly defined outcome studies before it can be recommended as a practice-based additional risk modifier.

Relevance: 40.00%

Abstract:

Georgia is known for its extraordinarily rich plant biodiversity, which may now be threatened by the spread of invasive alien plants (IAP). We aimed to identify (i) the most prominent of 9 selected potentially invasive and harmful IAP by predicting their distribution under current and future climate conditions in Georgia as well as in its 43 Protected Areas, used as a proxy for areas of high conservation value, and (ii) the Protected Areas most at risk from these IAP. We used species distribution models based on 6 climate variables and then filtered the obtained distributions using maps of soil and vegetation types and recorded occurrences, resulting in the predicted ecological distribution of the 9 IAP at a resolution of 1 km2. Our habitat suitability analysis showed that Ambrosia artemisiifolia (24% and 40%), Robinia pseudoacacia (14% and 19%) and Ailanthus altissima (9% and 11%) have the largest potential distribution (predicted % of area covered) for Georgia and the Protected Areas, respectively, with Ailanthus altissima potentially increasing the most over the next fifty years (from 9% to 13% and from 11% to 25%). Furthermore, our results indicate two areas in Georgia under especially high threat, i.e. the area around Tbilisi and an area in western Georgia (Adjara), both at lower altitudes. Our procedure to identify areas of high conservation value most at risk from IAP is applied here for the first time. It will help national authorities prioritize measures to protect Georgia's outstanding biodiversity from the negative impact of IAP.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

AIMS/HYPOTHESIS: Several susceptibility genes for type 2 diabetes have been discovered recently. Individually, these genes increase the disease risk only minimally. The goals of the present study were to determine, at the population level, the risk of diabetes in individuals who carry risk alleles within several susceptibility genes for the disease and the added value of this genetic information over the clinical predictors. METHODS: We constructed an additive genetic score using the most replicated single-nucleotide polymorphisms (SNPs) within 15 type 2 diabetes-susceptibility genes, weighting each SNP with its reported effect. We tested this score in the extensively phenotyped population-based cross-sectional CoLaus Study in Lausanne, Switzerland (n = 5,360), involving 356 diabetic individuals. RESULTS: The clinical predictors of prevalent diabetes were age, BMI, family history of diabetes, WHR, and triacylglycerol/HDL-cholesterol ratio. After adjustment for these variables, the risk of diabetes was 2.7 (95% CI 1.8-4.0, p = 0.000006) for individuals with a genetic score within the top quintile, compared with the bottom quintile. Adding the genetic score to the clinical covariates improved the area under the receiver operating characteristic curve slightly (from 0.86 to 0.87), yet significantly (p = 0.002). BMI was similar in these two extreme quintiles. CONCLUSIONS/INTERPRETATION: In this population, a simple weighted 15 SNP-based genetic score provides additional information over clinical predictors of prevalent diabetes. At this stage, however, the clinical benefit of this genetic information is limited.
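The weighted additive score described above can be sketched as follows: for each SNP, the carrier's risk-allele count (0, 1 or 2) is multiplied by that SNP's reported effect size and the products are summed. The SNP names and effect sizes here are invented for illustration; the study used the reported effects of SNPs in 15 susceptibility genes:

```python
# Sketch of a weighted additive genetic score: sum over SNPs of
# (risk-allele count) * (reported per-allele effect, e.g. log odds ratio).
# SNP labels and odds ratios below are hypothetical.

import math

def genetic_score(risk_allele_counts, effect_sizes):
    """Weighted additive score across the genotyped SNPs."""
    return sum(risk_allele_counts[snp] * beta
               for snp, beta in effect_sizes.items())

effects = {"SNP_A": math.log(1.37),   # hypothetical per-allele odds ratios
           "SNP_B": math.log(1.15),
           "SNP_C": math.log(1.12)}
person = {"SNP_A": 2, "SNP_B": 1, "SNP_C": 0}  # risk-allele counts

print(round(genetic_score(person, effects), 3))
```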

Relevância:

30.00% 30.00%

Publicador:

Resumo:

BACKGROUND: Little information is available on the validity of simple and indirect body-composition methods in non-Western populations. Equations for predicting body composition are population-specific, and body composition differs between blacks and whites. OBJECTIVE: We tested the hypothesis that the validity of equations for predicting total body water (TBW) from bioelectrical impedance analysis measurements is likely to depend on the racial background of the group from which the equations were derived. DESIGN: The hypothesis was tested by comparing, in 36 African women, TBW values measured by deuterium dilution with those predicted by 23 equations developed in white, African American, or African subjects. These cross-validations in our African sample were also compared, whenever possible, with results from other studies in black subjects. RESULTS: Errors in predicting TBW showed acceptable values (1.3-1.9 kg) in all cases, whereas a large range of bias (0.2-6.1 kg) was observed independently of the ethnic origin of the sample from which the equations were derived. Three equations (2 from whites and 1 from blacks) showed nonsignificant bias and could be used in Africans. In all other cases, we observed either an overestimation or underestimation of TBW with variable bias values, regardless of racial background, yielding no clear trend for validity as a function of ethnic origin. CONCLUSIONS: The findings of this cross-validation study emphasize the need for further fundamental research to explore the causes of the poor validity of TBW prediction equations across populations rather than the need to develop new prediction equations for use in Africa.
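The cross-validation above compares equation-predicted TBW against deuterium-dilution measurements via bias and prediction error. A minimal sketch, assuming an equation of the common BIA form TBW = a + b * height^2 / resistance + c * weight; the coefficients and subject data are hypothetical, not those of any of the 23 published equations:

```python
# Sketch of cross-validating a TBW prediction equation against
# deuterium-dilution measurements. Coefficients and data are made up.

def predict_tbw(height_cm, resistance_ohm, weight_kg,
                a=1.0, b=0.45, c=0.12):
    """Hypothetical BIA-style equation: TBW = a + b*H^2/R + c*W."""
    return a + b * height_cm ** 2 / resistance_ohm + c * weight_kg

def bias_and_error(predicted, measured):
    """Mean bias (prediction - measurement) and RMS prediction error."""
    diffs = [p - m for p, m in zip(predicted, measured)]
    bias = sum(diffs) / len(diffs)
    rmse = (sum(d * d for d in diffs) / len(diffs)) ** 0.5
    return bias, rmse

measured = [28.4, 31.0, 26.7]  # deuterium-dilution TBW (kg), hypothetical
pred = [predict_tbw(160, 550, 62), predict_tbw(168, 520, 70),
        predict_tbw(155, 600, 58)]
bias, rmse = bias_and_error(pred, measured)
print(round(bias, 2), round(rmse, 2))
```

A systematic (nonzero) bias with an acceptable error is exactly the pattern the abstract reports for most of the tested equations.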

Relevância:

30.00% 30.00%

Publicador:

Resumo:

ACuteTox is a project within the 6th European Framework Programme which had as one of its goals to develop, optimise and prevalidate a non-animal testing strategy for predicting human acute oral toxicity. In its last 6 months, a challenging exercise was conducted to assess the predictive capacity of the developed testing strategies and final identification of the most promising ones. Thirty-two chemicals were tested blind in the battery of in vitro and in silico methods selected during the first phase of the project. This paper describes the classification approaches studied: single step procedures and two step tiered testing strategies. In summary, four in vitro testing strategies were proposed as best performing in terms of predictive capacity with respect to the European acute oral toxicity classification. In addition, a heuristic testing strategy is suggested that combines the prediction results gained from the neutral red uptake assay performed in 3T3 cells, with information on neurotoxicity alerts identified by the primary rat brain aggregates test method. Octanol-water partition coefficients and in silico prediction of intestinal absorption and blood-brain barrier passage are also considered. This approach reduces the number of chemicals wrongly predicted as not classified (LD50>2000 mg/kg b.w.).
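The two-step tiered strategy described above can be sketched as: a cytotoxicity readout proposes an acute oral toxicity class, and a neurotoxicity alert prevents a hazardous chemical from being left unclassified. The IC50 cut-offs and class mapping below are hypothetical, used only to show the structure of such a tier:

```python
# Sketch of a two-step tiered classification strategy.
# Step 1: a crude class from a 3T3 neutral red uptake IC50.
# Step 2: a neurotoxicity alert overrides an "unclassified" call,
# reducing false negatives. All cut-offs here are hypothetical.

def classify(ic50_mg_per_l, neurotox_alert):
    # Step 1: cytotoxicity-based class (hypothetical IC50 cut-offs)
    if ic50_mg_per_l < 10:
        cls = "LD50 <= 300"
    elif ic50_mg_per_l < 100:
        cls = "300 < LD50 <= 2000"
    else:
        cls = "not classified (LD50 > 2000)"
    # Step 2: an alert from the brain-aggregate assay moves the
    # chemical out of the "not classified" bin
    if neurotox_alert and cls.startswith("not classified"):
        cls = "300 < LD50 <= 2000"
    return cls

print(classify(250.0, neurotox_alert=True))
print(classify(250.0, neurotox_alert=False))
```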