995 results for Risk assumption
Abstract:
We consider a class of sampling-based decomposition methods to solve risk-averse multistage stochastic convex programs. We prove a formula for the computation of the cuts necessary to build the outer linearizations of the recourse functions. This formula can be used to obtain an efficient implementation of Stochastic Dual Dynamic Programming applied to convex nonlinear problems. We prove the almost sure convergence of these decomposition methods when the relatively complete recourse assumption holds. We also prove the almost sure convergence of these algorithms when applied to risk-averse multistage stochastic linear programs that do not satisfy the relatively complete recourse assumption. The analysis is first done assuming the underlying stochastic process is interstage independent and discrete, with a finite set of possible realizations at each stage. We then indicate two ways of extending the methods and convergence analysis to the case when the process is interstage dependent.
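The cut formula itself is not reproduced in the abstract, but for the standard linear, interstage-independent case with discrete noise, the averaged Benders-style cut built by SDDP can be sketched as follows (a minimal one-dimensional illustration; the `values` and `subgradients` are assumed to come from solving the stage subproblems at the trial point):

```python
from statistics import mean

def average_cut(values, subgradients, x_trial):
    """Aggregate per-scenario cuts into one average cut for the
    expected recourse function Q(x) = E[Q_k(x)].

    values[k]       -- optimal value of the stage-(t+1) subproblem for
                       scenario k, solved at the trial point x_trial
    subgradients[k] -- a dual-based subgradient of Q_k at x_trial

    Returns (intercept, slope) such that Q(x) >= intercept + slope * x
    is a valid outer linearization.
    """
    intercept = mean(v - g * x_trial for v, g in zip(values, subgradients))
    slope = mean(subgradients)
    return intercept, slope
```

By construction the cut is tight at the trial point: `intercept + slope * x_trial` equals the sample mean of the subproblem values. The paper's contribution is a cut formula valid for the more general convex nonlinear case, which this linear sketch does not capture.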
Abstract:
Standard procedures for forecasting flood risk (Bulletin 17B) assume annual maximum flood (AMF) series are stationary, meaning the distribution of flood flows is not significantly affected by climatic trends/cycles, or anthropogenic activities within the watershed. Historical flood events are therefore considered representative of future flood occurrences, and the risk associated with a given flood magnitude is modeled as constant over time. However, in light of increasing evidence to the contrary, this assumption should be reconsidered, especially as the existence of nonstationarity in AMF series can have significant impacts on planning and management of water resources and relevant infrastructure. Research presented in this thesis quantifies the degree of nonstationarity evident in AMF series for unimpaired watersheds throughout the contiguous U.S., identifies meteorological, climatic, and anthropogenic causes of this nonstationarity, and proposes an extension of the Bulletin 17B methodology which yields forecasts of flood risk that reflect climatic influences on flood magnitude. To appropriately forecast flood risk, it is necessary to consider the driving causes of nonstationarity in AMF series. Herein, large-scale climate patterns—including El Niño-Southern Oscillation (ENSO), Pacific Decadal Oscillation (PDO), North Atlantic Oscillation (NAO), and Atlantic Multidecadal Oscillation (AMO)—are identified as influencing factors on flood magnitude at numerous stations across the U.S. Strong relationships between flood magnitude and associated precipitation series were also observed for the majority of sites analyzed in the Upper Midwest and Northeastern regions of the U.S. Although relationships between flood magnitude and associated temperature series are not apparent, results do indicate that temperature is highly correlated with the timing of flood peaks. 
Despite consideration of watersheds classified as unimpaired, analyses also suggest that identified change-points in AMF series are due to dam construction and other types of regulation and diversion. Although not explored herein, trends in AMF series are also likely to be partially explained by changes in land use and land cover over time. Results obtained herein suggest that improved forecasts of flood risk may be obtained using a simple modification of the Bulletin 17B framework, wherein the mean and standard deviation of the log-transformed flows are modeled as functions of climate indices associated with oceanic-atmospheric patterns (e.g. AMO, ENSO, NAO, and PDO) with lead times between 3 and 9 months. Herein, one-year-ahead forecasts of the mean and standard deviation, and subsequently flood risk, are obtained by applying site-specific multivariate regression models, which reflect the phase and intensity of a given climate pattern, as well as possible impacts of coupling of the climate cycles. These forecasts of flood risk are compared with forecasts derived using the existing Bulletin 17B model; large differences in the one-year-ahead forecasts are observed in some locations. The increased knowledge of the inherent structure of AMF series and an improved understanding of physical and/or climatic causes of nonstationarity gained from this research should serve as insight for the formulation of a physical-causal statistical model, incorporating both climatic variations and human impacts, for flood risk over longer planning horizons (e.g., 10-, 50-, 100-year) necessary for water resources design, planning, and management.
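As a rough illustration of that modification (not the thesis's actual site-specific models: the single predictor, the lag, and the lognormal simplification here are all assumed, whereas Bulletin 17B itself fits a log-Pearson III distribution with a skew-dependent frequency factor):

```python
from statistics import mean

def ols(x, y):
    """Simple least-squares fit y ~ b0 + b1*x (one climate index)."""
    xbar, ybar = mean(x), mean(y)
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
         sum((xi - xbar) ** 2 for xi in x)
    return ybar - b1 * xbar, b1

def forecast_flood_quantile(b0, b1, index_value, sigma, z=2.054):
    """One-year-ahead flood quantile (here the ~98th percentile,
    z = 2.054), assuming log10-flows are normally distributed with a
    mean that depends on a lagged climate index (e.g. ENSO or AMO)."""
    mu = b0 + b1 * index_value  # climate-conditioned mean of log10-flow
    return 10 ** (mu + z * sigma)
```

In the thesis the regressions are multivariate (several coupled indices at 3-9 month leads) and both the mean and the standard deviation of the log-flows are forecast; this sketch conditions only the mean on a single index.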
Abstract:
The presented approach describes a model for a rule-based expert system calculating the temporal variability of the release of wet snow avalanches, using the assumption of avalanche triggering without the loading of new snow. The knowledge base of the model is created using investigations on the system behaviour of wet snow avalanches in the Italian Ortles Alps, and is represented by a fuzzy logic rule base. Input parameters of the expert system are numerical and linguistic variables, measurable meteorological and topographical factors, and observable characteristics of the snow cover. The output of the inference method is the quantified release disposition for wet snow avalanches. By combining topographical parameters with the spatial interpolation of the calculated release disposition, a hazard index map is dynamically generated. Furthermore, the spatial and temporal variability of damage potential on roads exposed to wet snow avalanches can be quantified, expressed as the number of persons at risk. The application of the rule base to the available data in the study area generated plausible results. The study demonstrates the potential for the application of expert systems and fuzzy logic in the field of natural hazard monitoring and risk management.
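A toy version of such a fuzzy inference step (the membership functions, input variables, and rules below are invented for illustration and are not taken from the model's actual knowledge base):

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def release_disposition(air_temp_c, snow_wetness):
    """Tiny two-rule Mamdani-style inference (min for AND), defuzzified
    as a weighted average of the rule outputs; returns a disposition
    in [0, 1] for wet-snow avalanche release."""
    warm = tri(air_temp_c, 0.0, 5.0, 10.0)   # membership "air is warm"
    wet = tri(snow_wetness, 0.3, 0.7, 1.1)   # membership "snow is wet"
    # Rule 1: IF warm AND wet THEN disposition high (1.0)
    # Rule 2: IF NOT warm THEN disposition low (0.0)
    w1, w2 = min(warm, wet), 1.0 - warm
    total = w1 + w2
    return (w1 * 1.0 + w2 * 0.0) / total if total > 0 else 0.0
```

The real system combines many more meteorological, topographical, and snow-cover variables, and its output is spatially interpolated to build the hazard index map.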
Abstract:
BACKGROUND Ductal carcinoma in situ (DCIS) is a noninvasive breast lesion with uncertain risk for invasive progression. Usual care (UC) for DCIS consists of treatment upon diagnosis, thus potentially overtreating patients with low propensity for progression. One strategy to reduce overtreatment is active surveillance (AS), whereby DCIS is treated only upon detection of invasive disease. Our goal was to perform a quantitative evaluation of outcomes following an AS strategy for DCIS. METHODS Age-stratified, 10-year disease-specific cumulative mortality (DSCM) for AS was calculated using a computational risk projection model based upon published estimates for natural history parameters, and Surveillance, Epidemiology, and End Results data for outcomes. AS projections were compared with the DSCM for patients who received UC. To quantify the propagation of parameter uncertainty, a 95% projection range (PR) was computed, and sensitivity analyses were performed. RESULTS Under the assumption that AS cannot outperform UC, the projected median differences in 10-year DSCM between AS and UC when diagnosed at ages 40, 55, and 70 years were 2.6% (PR = 1.4%-5.1%), 1.5% (PR = 0.5%-3.5%), and 0.6% (PR = 0.0%-2.4%), respectively. Corresponding median numbers of patients needed to treat to avert one breast cancer death were 38.3 (PR = 19.7-69.9), 67.3 (PR = 28.7-211.4), and 157.2 (PR = 41.1-3872.8), respectively. Sensitivity analyses showed that the parameter with greatest impact on DSCM was the probability of understaging invasive cancer at diagnosis. CONCLUSION AS could be a viable management strategy for carefully selected DCIS patients, particularly among older age groups and those with substantial competing mortality risks. The effectiveness of AS could be markedly improved by reducing the rate of understaging.
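The reported numbers needed to treat follow approximately from the reciprocal of the absolute difference in DSCM between the two strategies; the exact figures in the abstract are medians over the projection model's runs, but the underlying arithmetic can be sketched as:

```python
def needed_to_treat(dscm_as, dscm_uc):
    """Number needed to treat = 1 / absolute risk reduction, here the
    difference in 10-year disease-specific cumulative mortality between
    active surveillance (AS) and usual care (UC)."""
    return 1.0 / (dscm_as - dscm_uc)
```

For the age-40 stratum, a DSCM difference of 2.6% gives 1/0.026, roughly 38 patients, close to the reported median of 38.3.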
Abstract:
The central paradigm linking disadvantaged social status and mental health has been the social stress model (Horwitz, 1999), the assumption being that individuals residing in lower social status groups are subjected to greater levels of stress not experienced by individuals from higher status groups. A further assumption is that such individuals have fewer resources to cope with stress, in turn leading to higher levels of psychological disorder, including depression (Pearlin, 1989). Despite these key assumptions, there is a dearth of literature comparing the social patterning of stress exposure (Hatch & Dohrenwend, 2007; Meyer, Schwartz, & Frost, 2008; Kessler, Mickelson, & Williams, 1999; Turner & Avison, 2003; Turner & Lloyd, 1999; Turner, Wheaton, & Lloyd, 1995) and the distribution and contribution of protective factors posited to play a role in the low rates of depression found among African- and Latino-Americans (Alegria et al., 2007; Breslau, Aguilar-Gaxiola, Kendler, Su, Williams, & Kessler, 2006; Breslau, Borges, Hagar, Tancredi, & Gilman, 2009; Gavin, Walton, Chae, Alegria, Jackson, & Takeuchi, 2010; Williams & Neighbors, 2006). Thus, this study sought to describe both the distribution and contribution of risk and protective factors in relation to depression among a sample of African-, European-, and Latina-American mothers of adolescents, including testing a hypothesized mechanism through which social support, an important protective factor specific to women and depression, operates. Although levels of depression were not statistically different across the three groups of women, surprising results emerged in describing the distribution of both risk and protective factors: results reported for all mothers combined masked differences within each ethnic group once SES was assessed, a point made explicit by Williams (2002) regarding racial and ethnic variations in women's health.
In the final analysis, while perceived social support was found to partially mediate the effect of social isolation on depression, among African-Americans the direct effect of social isolation on depression was lower than among European- and Latina-American mothers, as was the indirect effect of social isolation through perceived social support. Put differently, higher levels of social isolation were not as strongly associated with more depression or lower social support among African-American mothers as among their European- and Latina-American counterparts. Women in American society occupy a number of roles, i.e., that of being female, married or single, mother, homemaker or employee. In addition to these roles, ethnicity and SES also come into play, such that the intersection of all these roles and the social contexts that they occupy are equally important and must be taken into consideration when making predictions drawn from the social stress model. Based on these findings, it appears that the assumptions of the social stress model need to be revisited to include the variety of roles that intersect among individuals from differing social groups. More specifically, among women who are mothers and occupy a myriad of other roles, i.e., that of being female, married or single, African- or Latina-American, mother, homemaker or employee, the intersection of all the roles and the social contexts that women occupy are equally important and must be taken into consideration when looking at both the types and distribution of stressors across women. Predictions based on simple, mutually exclusive categories of social groups may lead to erroneous assumptions and misleading results.
Abstract:
This paper discusses a model based on agency theory to analyze the optimal transfer of construction risk in public works contracts. The base assumption is that of a contract between a principal (public authority) and an agent (firm), where the payment mechanism is linear and contains an incentive mechanism to enhance the effort of the agent to reduce construction costs. A theoretical model is proposed, starting from a cost function with a random component and assuming that both the public authority and the firm are risk averse. The main outcome of the paper is that the optimal transfer of construction risk will be lower when the variance of errors in cost forecasting, the risk aversion of the firm, and the marginal cost of public funds are larger, while the optimal transfer of construction risk will grow when the variance of errors in cost monitoring and the risk aversion of the public authority are larger.
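The paper's own optimality formula is not given in the abstract. As a hedged benchmark only, the textbook linear-contract (Holmström-Milgrom LEN) sharing rule reproduces two of the reported comparative statics, namely that the transferred share falls as the firm's risk aversion or the variance of cost shocks grows; it omits the monitoring-error variance, the authority's risk aversion, and the marginal cost of public funds that the paper's model also includes:

```python
def incentive_share(risk_aversion_firm, cost_variance, effort_cost_curvature=1.0):
    """Textbook LEN incentive intensity beta* = 1 / (1 + r * k * sigma^2):
    the share of cost risk borne by the firm. NOT the paper's formula,
    just a standard benchmark with the same monotonicity in r and sigma^2."""
    return 1.0 / (1.0 + risk_aversion_firm * effort_cost_curvature * cost_variance)

def payment(fixed_fee, share, target_cost, realized_cost):
    """Linear payment mechanism: a fixed fee plus a share of the cost
    saving relative to the agreed target cost."""
    return fixed_fee + share * (target_cost - realized_cost)
```

A risk-neutral firm (r = 0) bears all the risk (share 1); as risk aversion or cost-shock variance increases, the optimal share, and hence the transferred construction risk, shrinks.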
Abstract:
How can empirical evidence of adverse effects from exposure to noxious agents, which is often incomplete and uncertain, be used most appropriately to protect human health? We examine several important questions on the best uses of empirical evidence in regulatory risk management decision-making raised by the US Environmental Protection Agency (EPA)'s science-policy concerning uncertainty and variability in human health risk assessment. In our view, the US EPA (and other agencies that have adopted similar views of risk management) can often improve decision-making by decreasing reliance on default values and assumptions, particularly when causation is uncertain. This can be achieved by more fully exploiting decision-theoretic methods and criteria that explicitly account for uncertain, possibly conflicting scientific beliefs and that can be fully studied by advocates and adversaries of a policy choice, in administrative decision-making involving risk assessment. The substitution of decision-theoretic frameworks for default assumption-driven policies also allows stakeholder attitudes toward risk to be incorporated into policy debates, so that the public and risk managers can more explicitly identify the roles of risk-aversion or other attitudes toward risk and uncertainty in policy recommendations. Decision theory provides a sound scientific way explicitly to account for new knowledge and its effects on eventual policy choices. Although these improvements can complicate regulatory analyses, simplifying default assumptions can create substantial costs to society and can prematurely cut off consideration of new scientific insights (e.g., possible beneficial health effects from exposure to sufficiently low 'hormetic' doses of some agents). In many cases, the administrative burden of applying decision-analytic methods is likely to be more than offset by improved effectiveness of regulations in achieving desired goals. 
Because many foreign jurisdictions adopt US EPA reasoning and methods of risk analysis, it may be especially valuable to incorporate decision-theoretic principles that transcend local differences among jurisdictions.
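A minimal sketch of the decision-theoretic criterion described above, choosing among regulatory options by certainty-equivalent loss under an explicit risk-aversion parameter rather than by default assumptions (the losses, probabilities, and action names below are illustrative placeholders, not drawn from any EPA analysis):

```python
from math import exp, log

def expected_loss(losses, probs):
    """Risk-neutral criterion: probability-weighted average loss."""
    return sum(l * p for l, p in zip(losses, probs))

def certainty_equivalent_loss(losses, probs, r):
    """Certainty-equivalent loss under exponential (constant absolute
    risk aversion) utility; r > 0 penalizes uncertain outcomes, and
    r -> 0 recovers the expected loss."""
    return log(sum(p * exp(r * l) for l, p in zip(losses, probs))) / r

def choose(actions, probs, r):
    """Pick the action (name -> per-state loss list) with the smallest
    certainty-equivalent loss given stakeholder risk aversion r."""
    return min(actions, key=lambda a: certainty_equivalent_loss(actions[a], probs, r))
```

This makes the role of risk attitudes explicit: a nearly risk-neutral decision-maker may prefer an option with a lower mean loss but more uncertainty, while a strongly risk-averse one prefers the safer option, exactly the kind of trade-off that default assumptions hide.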
Abstract:
The western honey bee, Apis mellifera L., is currently the model species for pesticide risk assessment on pollinators, with the assumption that the worst-case scenarios for this species are sufficiently conservative to protect other insect pollinators. However, recent studies have shown that wild species may be more sensitive to plant protection products, due to differences in biology and life cycles. Therefore, there is a need to extend the risk assessment within a more ecological approach, in order to ensure that there are no irreversible effects on non-target organisms and in the environment. My dissertation aims to expand the risk assessment to other insect pollinators (including wild and managed pollinators), in order to cover some of the gaps in the current schemes. In this thesis, three experiments are presented, covering the early stages of a solitary bee (chapter 1), the development of molecular tools for early detection of sub-lethal effects (chapter 2), and the development of protocols to assess lethal and sub-lethal effects on other pollinator taxa (Diptera; chapter 3).
Abstract:
To evaluate associations between polymorphisms of the N-acetyltransferase 2 (NAT2), human 8-oxoguanine glycosylase 1 (hOGG1) and X-ray repair cross-complementing protein 1 (XRCC1) genes and risk of upper aerodigestive tract (UADT) cancer. A case-control study involving 117 cases and 224 controls was undertaken. The NAT2 gene polymorphisms were genotyped by automated sequencing, and the XRCC1 Arg399Gln and hOGG1 Ser326Cys polymorphisms were determined by Polymerase Chain Reaction followed by Restriction Fragment Length Polymorphism (PCR-RFLP) methods. The slow-metabolization phenotype was significantly associated with increased risk of developing UADT cancer (p=0.038). Furthermore, the slow-metabolization haplotype was also associated with UADT cancer (p=0.014). The hOGG1 Ser326Cys polymorphism (CG or GG vs. CC genotypes) was a protective factor against UADT cancer in moderate smokers (p=0.031). The XRCC1 Arg399Gln polymorphism (GA or AA vs. GG genotypes), in turn, was a protective factor against UADT cancer only among never-drinkers (p=0.048). Interactions involving the NAT2, XRCC1 Arg399Gln and hOGG1 Ser326Cys polymorphisms may modulate the risk of UADT cancer in this population.
Abstract:
To analyze associations between mammographic breast arterial calcifications in menopausal women and risk factors for cardiovascular disease. This was a cross-sectional retrospective study, in which we analyzed the mammograms and medical records of 197 patients treated between 2004 and 2005. Study variables were: breast arterial calcifications, stroke, acute coronary syndrome, age, obesity, diabetes mellitus, smoking, and hypertension. For statistical analysis, we used the Mann-Whitney, χ2 and Cochran-Armitage tests, and also evaluated the prevalence ratios between these variables and breast arterial calcifications. Data were analyzed with the SAS version 9.1 software. In the group of 197 women, there was a prevalence of 36.6% of arterial calcifications on mammograms. Among the risk factors analyzed, the most frequent were hypertension (56.4%), obesity (31.9%), smoking (15.2%), and diabetes (14.7%). Acute coronary syndrome and stroke presented 5.6 and 2.0% prevalence, respectively. Among the mammograms of women with diabetes, the odds ratio of breast arterial calcifications was 2.1 (95%CI 1.0-4.1), with a p-value of 0.02. On the other hand, the mammograms of smokers showed a lower occurrence of breast arterial calcification, with an odds ratio of 0.3 (95%CI 0.1-0.8). Hypertension, obesity, stroke and acute coronary syndrome were not significantly associated with breast arterial calcification. The occurrence of breast arterial calcification was positively associated with diabetes mellitus and negatively associated with smoking. The presence of calcification was independent of the other risk factors for cardiovascular disease analyzed.
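The reported odds ratios come from 2×2 tables of calcification by risk factor; the crude calculation (with made-up counts, since the study's cell counts are not given in the abstract) is:

```python
def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Crude odds ratio from a 2x2 table: (a/b) / (c/d), where a,b are
    cases/non-cases among the exposed (e.g. diabetic women with and
    without calcifications) and c,d among the unexposed."""
    return (exposed_cases / exposed_noncases) / (unexposed_cases / unexposed_noncases)
```

An OR above 1 (as with diabetes, 2.1) indicates a positive association; below 1 (as with smoking, 0.3) a negative one.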
Abstract:
Urinary tract infection (UTI) is the most common infection posttransplant. However, the risk factors for and the impact of UTIs remain controversial. The aim of this study was to identify the incidence of posttransplant UTIs in a series of renal transplant recipients from deceased donors. Secondary objectives were to identify: (1) the most frequent infectious agents; (2) risk factors related to the donor; (3) risk factors related to the recipient; and (4) the impact of UTI on graft function. This was a retrospective analysis of medical records from renal transplant patients from January to December 2010. The local ethics committee approved the protocol. The incidence of UTI in this series was 34.2%. Risk factors for UTI were older age (independent of gender), biopsy-proven acute rejection episodes, and kidneys from expanded criteria donors (United Network for Organ Sharing criteria). For female patients, the number of pretransplant pregnancies was an additional risk factor. Recurrent UTI was observed in 44% of patients from the UTI group. The most common infectious agents were Escherichia coli and Klebsiella pneumoniae, for both isolated and recurrent UTI. No difference in renal graft function or immunosuppressive therapy was observed between groups after the 1-year follow-up. In this series, older age, previous pregnancy, kidneys from expanded criteria donors, and biopsy-proven acute rejection episodes were risk factors for posttransplant UTI. Recurrence of UTI was observed in 44%, with no negative impact on graft function or survival.
Abstract:
The aim of the present study was to identify factors associated with the occurrence of falls among elderly adults in a population-based study (ISACamp 2008). A population-based cross-sectional study was carried out with two-stage cluster sampling. The sample was composed of 1,520 elderly adults living in the urban area of the city of Campinas, São Paulo, Brazil. The occurrence of falls was analyzed based on reports of the main accident that occurred in the previous 12 months. Data on socioeconomic/demographic factors and adverse health conditions were tested for possible associations with the outcome. Prevalence ratios (PR) were estimated and adjusted for gender and age using Poisson multiple regression analysis. Falls were more frequent, after adjustment for gender and age, among female elderly participants (PR = 2.39; 95% confidence interval (95% CI) 1.47 - 3.87), the oldest adults (80 years old and older) (PR = 2.50; 95% CI 1.61 - 3.88), the widowed (PR = 1.74; 95% CI 1.04 - 2.89), and among elderly adults who had rheumatism/arthritis/arthrosis (PR = 1.58; 95% CI 1.00 - 2.48), osteoporosis (PR = 1.71; 95% CI 1.18 - 2.49), asthma/bronchitis/emphysema (PR = 1.73; 95% CI 1.09 - 2.74), headache (PR = 1.59; 95% CI 1.07 - 2.38), common mental disorder (PR = 1.72; 95% CI 1.12 - 2.64), dizziness (PR = 2.82; 95% CI 1.98 - 4.02), insomnia (PR = 1.75; 95% CI 1.16 - 2.65), use of multiple medications (five or more) (PR = 2.50; 95% CI 1.12 - 5.56) and use of a cane/walker (PR = 2.16; 95% CI 1.19 - 3.93). The present study identifies segments of the elderly population who are more prone to falls through the identification of factors associated with this outcome. The findings can contribute to the planning of public health policies and programs addressed to the prevention of falls.
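The study's PRs are adjusted for gender and age via Poisson multiple regression; the crude (unadjusted) prevalence ratio underlying them is simply a ratio of prevalences (the counts below are illustrative, not the study's):

```python
def prevalence_ratio(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """Crude prevalence ratio: prevalence of the outcome (falls) among
    the exposed group divided by prevalence among the unexposed group."""
    return (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)
```

For cross-sectional outcomes like falls, Poisson regression with the PR as effect measure is often preferred over logistic regression because the odds ratio overstates the PR when the outcome is common.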
Abstract:
The aim of this study was to determine the frequency of leukemia in parents of patients with nonsyndromic cleft lip and/or cleft palate (NSCL/P). This case-control study evaluated first-degree family members of 358 patients with NSCL/P and of 1,432 subjects without craniofacial alterations or syndromes. Statistical analysis was carried out using Fisher's test. Of the 358 subjects with NSCL/P, 3 had first-degree relatives with a history of leukemia, while 2 of the 1,432 subjects from the unaffected group had a family history of leukemia. The frequency of a positive family history of leukemia was not significantly increased in first-degree relatives of patients with NSCL/P.
Abstract:
This study tested whether myocardial extracellular volume (ECV) is increased in patients with hypertension and atrial fibrillation (AF) undergoing pulmonary vein isolation and whether there is an association between ECV and post-procedural recurrence of AF. Hypertension is associated with myocardial fibrosis, an increase in ECV, and AF. Data linking these findings are limited. T1 measurements pre-contrast and post-contrast in a cardiac magnetic resonance (CMR) study provide a method for quantification of ECV. Consecutive patients with hypertension and recurrent AF referred for pulmonary vein isolation underwent a contrast CMR study with measurement of ECV and were followed up prospectively for a median of 18 months. The endpoint of interest was late recurrence of AF. Patients had elevated left ventricular (LV) volumes, LV mass, left atrial volumes, and increased ECV (patients with AF, 0.34 ± 0.03; healthy control patients, 0.29 ± 0.03; p < 0.001). There were positive associations between ECV and left atrial volume (r = 0.46, p < 0.01) and LV mass and a negative association between ECV and diastolic function (early mitral annular relaxation [E'], r = -0.55, p < 0.001). In the best overall multivariable model, ECV was the strongest predictor of the primary outcome of recurrent AF (hazard ratio: 1.29; 95% confidence interval: 1.15 to 1.44; p < 0.0001) and the secondary composite outcome of recurrent AF, heart failure admission, and death (hazard ratio: 1.35; 95% confidence interval: 1.21 to 1.51; p < 0.0001). Each 10% increase in ECV was associated with a 29% increased risk of recurrent AF. In patients with AF and hypertension, expansion of ECV is associated with diastolic function and left atrial remodeling and is a strong independent predictor of recurrent AF post-pulmonary vein isolation.
Abstract:
Focal cryoablation (FC), brachytherapy (B) and active surveillance (AS) were offered to patients diagnosed with very low-risk prostate cancer (VLRPC) in an equal-access protocol. Comprehensive validated self-report questionnaires assessed patients' erectile (IIEF-5) and voiding (IPSS) functions, Beck scales measured anxiety (BAI), hopelessness (BHS) and depression (BDI), and the SF-36 reflected patients' quality of life, complemented by the emotional thermometers comprising five visual analogue scales (distress, anxiety, depression, anger and need for help). Kruskal-Wallis or ANOVA tests and Spearman's correlations were obtained among groups and studied variables. Thirty patients were included, with a median follow-up of 18 months (15-21). Those on AS (n = 11) were older and presented higher hopelessness (BHS) and lower general health perceptions (SF-36) scores than patients opting for FC (n = 10) and B (n = 9), P = 0.0014, P = 0.0268 and P = 0.0168 respectively. Patients on B had higher IPSS scores compared to those under FC and AS, P = 0.0223. For all 30 included patients, Spearman's correlation (rs) was very strong between BHS and general health perceptions (rs = -0.800, P < 0.0001), and weak/moderate between age and BHS (rs = 0.405, P = 0.026) and between age and general health perceptions (rs = -0.564, P = 0.001). The sample power was >60%. To be considered in patients' counselling and care, the current study supports the hypothesis that VLRPC, even when untreated, undermines psychosocial domains.