Abstract:
Mr. Pechersky set out to examine a specific feature of the employer-employee relationship in Russian business organisations. He wanted to study to what extent the so-called "moral hazard" problem is solved (if it is solved at all), whether there is a relationship between pay and performance, and whether economic theory corresponds to Russian reality. Finally, he set out to construct a model of the Russian economy that better reflects the way it actually functions than do certain other well-known models (for example, models of incentive compensation, the Shapiro-Stiglitz model, etc.). His report was presented to the RSS in the form of a series of manuscripts in English and Russian, and on disc, with many tables and graphs. He begins by pointing out the different sources of randomness that exist in the relationship between employee and employer. Firstly, results are frequently affected by circumstances outside the employee's control that have nothing to do with how intelligently, honestly, and diligently the employee has worked. When rewards are based on results, uncontrollable randomness in the employee's output induces randomness in his or her income. A second source of randomness involves outside events beyond the employee's control that may affect his or her ability to perform as contracted. A third source arises when the performance itself (rather than the result) is measured, and the performance evaluation procedures include random or subjective elements. Mr. Pechersky's study shows that in Russia the third source of randomness plays an important role. Moreover, he points out that employer-employee relationships in Russia are sometimes the opposite of those in the West. Drawing on game theory, he characterises the Western system as follows. The two players are the principal and the agent, who are usually representative individuals. 
The principal hires an agent to perform a task, and the agent acquires an information advantage concerning his actions or the outside world at some point in the game, i.e. it is assumed that the employee is better informed. In Russia, on the other hand, incentive contracts are typically negotiated in situations in which the employer has the information advantage concerning the outcome. Mr. Pechersky schematises it thus. Compensation (the wage) is W and consists of a base amount plus a portion that varies with the outcome, x. So W = a + bx, where b measures the intensity of the incentives provided to the employee: one contract is said to provide stronger incentives than another if it specifies a higher value for b. This is the incentive contract as it operates in the West. The key feature distinguishing the Russian case is that x is observed by the employer but not by the employee. So the employer promises to pay in accordance with an incentive scheme, but since the outcome is not observable by the employee the contract cannot be enforced, and the question arises: is there any incentive for the employer to fulfil his or her promises? Mr. Pechersky considers two simple models of employer-employee relationships displaying the above type of information asymmetry. In a static framework the result obtained is somewhat surprising: at the Nash equilibrium the employer pays nothing, even though his objective function contains a quadratic term reflecting negative consequences for the employer if the actual level of compensation deviates from the employee's expectations. Such deviations can lead, for example, to labour turnover or to the expenses resulting from a bad reputation. In a dynamic framework, the conclusion can be formulated as follows: the higher the discount factor, the higher the incentive for the employer to be honest in his or her relationships with the employee. 
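The linear contract W = a + bx can be made concrete in a short sketch; all numbers below are hypothetical and serve only to show how b scales the variable part of pay.

```python
def wage(a: float, b: float, x: float) -> float:
    """Linear incentive contract W = a + b*x.

    a -- base amount, paid regardless of the outcome
    b -- incentive intensity: how strongly pay varies with the outcome
    x -- observed outcome
    """
    return a + b * x

# A contract with a higher b provides stronger incentives: the same
# improvement in the outcome changes pay by more (numbers hypothetical).
weak = wage(100.0, 0.1, 10.0) - wage(100.0, 0.1, 0.0)    # pay rises by 1.0
strong = wage(100.0, 0.5, 10.0) - wage(100.0, 0.5, 0.0)  # pay rises by 5.0
```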
If the discount factor is taken to be a parameter reflecting the degree of (un)certainty (the higher the degree of uncertainty, the lower the discount factor), we can conclude that the answer to the question formulated above depends on the stability of the political, social and economic situation in a country. Mr. Pechersky believes that the strength of a market system with private property lies not just in providing the information needed to compute an efficient allocation of resources. At least equally important is the manner in which it accepts individually self-interested behaviour and then channels this behaviour in desired directions. People do not have to be cajoled, artificially induced, or forced to do their parts in a well-functioning market system. Instead, they are simply left to pursue their own objectives as they see fit. Under the right circumstances, people are led by Adam Smith's "invisible hand" of impersonal market forces to take the actions needed to achieve an efficient, co-ordinated pattern of choices. The problem, as Mr. Pechersky sees it, is that there is no reason to believe that the circumstances in Russia are right and that the invisible hand is doing its work properly. Political instability, social tension and other circumstances prevent it from doing so. Mr. Pechersky believes that the discount factor plays a crucial role in employer-employee relationships. Such relationships can be considered satisfactory from a normative point of view only in those cases where the discount factor is sufficiently large. Unfortunately, in modern Russia the evidence points to the typical discount factor being relatively small. This can be explained as a manifestation of economic agents' risk aversion. Mr. Pechersky hopes that when political stabilisation occurs, the discount factors of economic agents will increase, and agents' behaviour will become explicable in terms of more traditional models.
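The dynamic conclusion — that honesty is sustainable only when the discount factor is high enough — can be illustrated with a standard repeated-game calculation. This is a generic textbook sketch, not Mr. Pechersky's actual model: the employer compares a one-shot gain g from reneging on the promised wage with the discounted stream of per-period losses v (turnover, reputation) that reneging triggers.

```python
def honesty_sustainable(delta: float, one_shot_gain: float,
                        per_period_loss: float) -> bool:
    """Reneging pays g once; honesty avoids losing v in every future period.
    Keeping the promise is sustainable when delta/(1-delta) * v >= g."""
    return delta / (1.0 - delta) * per_period_loss >= one_shot_gain

def min_discount_factor(one_shot_gain: float, per_period_loss: float) -> float:
    """Smallest discount factor at which the employer prefers honesty:
    delta >= g / (g + v)."""
    g, v = one_shot_gain, per_period_loss
    return g / (g + v)
```

With g = v the threshold is 0.5; instability that pushes delta below the threshold makes "pay nothing" the equilibrium, matching the static result in the abstract.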
Abstract:
Biomarkers are currently best used as mechanistic "signposts" rather than as "traffic lights" in the environmental risk assessment of endocrine-disrupting chemicals (EDCs). In field studies, biomarkers of exposure [e.g., vitellogenin (VTG) induction in male fish] are powerful tools for tracking single substances and mixtures of concern. Biomarkers also provide linkage between field and laboratory data, thereby playing an important role in directing the need for and design of fish chronic tests for EDCs. It is the adverse effect end points (e.g., altered development, growth, and/or reproduction) from such tests that are most valuable for calculating adverseNOEC (no observed effect concentration) or adverseEC10 (effective concentration for a 10% response) and subsequently deriving predicted no effect concentrations (PNECs). With current uncertainties, biomarkerNOEC or biomarkerEC10 data should not be used in isolation to derive PNECs. In the future, however, there may be scope to increasingly use biomarker data in environmental decision making, if plausible linkages can be made across levels of organization such that adverse outcomes might be envisaged relative to biomarker responses. For biomarkers to fulfil their potential, they should be mechanistically relevant and reproducible (as measured by interlaboratory comparisons of the same protocol). VTG is a good example of such a biomarker in that it provides insight into the mode of action (estrogenicity) that is vital to fish reproductive health. Interlaboratory reproducibility data for VTG are also encouraging; recent comparisons (using the same immunoassay protocol) have provided coefficients of variation (CVs) of 38-55% (comparable to published CVs of 19-58% for fish survival and growth end points used in regulatory test guidelines). 
While concern over environmental xenoestrogens has led to the evaluation of reproductive biomarkers in fish, it must be remembered that many substances act via diverse mechanisms of action such that the environmental risk assessment for EDCs is a broad and complex issue. Also, biomarkers such as secondary sexual characteristics, gonadosomatic indices, plasma steroids, and gonadal histology have significant potential for guiding interspecies assessments of EDCs and designing fish chronic tests. To strengthen the utility of EDC biomarkers in fish, we need to establish a historical control database (also considering natural variability) to help differentiate between statistically detectable versus biologically significant responses. In conclusion, as research continues to develop a range of useful EDC biomarkers, environmental decision-making needs to move forward, and it is proposed that the "biomarkers as signposts" approach is a pragmatic way forward in the current risk assessment of EDCs.
Abstract:
OBJECTIVE: To assess the influence of recipient's and donor's factors as well as surgical events on the occurrence of reperfusion injury after lung transplantation. DESIGN AND SETTING: Retrospective study in the surgical intensive care unit (ICU) of a university hospital. METHODS: We collected data on 60 lung transplantation donor/recipient pairs from June 1993 to May 2001, and compared the demographic, peri- and postoperative variables of patients who experienced reperfusion injury (35%) and those who did not. RESULTS: The occurrence of high systolic pulmonary pressure immediately after transplantation and/or its persistence during the first 48 h after surgery was associated with reperfusion injury, independently of preoperative values. Reperfusion injury was associated with difficult hemostasis during transplantation (p = 0.03). Patients with reperfusion injury were more likely to require the administration of catecholamine during the first 48 h after surgery (p = 0.014). Extubation was delayed (p = 0.03) and the relative odds of ICU mortality were significantly greater (OR 4.8, 95% CI 1.06-21.8) in patients with reperfusion injury. Our analysis confirmed that preexisting pulmonary hypertension increased the incidence of reperfusion injury (p < 0.01). CONCLUSIONS: Difficulties in perioperative hemostasis were associated with reperfusion injury. Occurrence of reperfusion injury was associated with postoperative systolic pulmonary hypertension, longer mechanical ventilation and higher mortality. Whether early recognition and treatment of pulmonary hypertension during transplantation can prevent the occurrence of reperfusion injury needs to be investigated.
Abstract:
BACKGROUND AND PURPOSE: Time delays from stroke onset to arrival at the hospital are the main obstacles to widespread use of thrombolysis. In order to decrease the delays, educational campaigns try to inform the general public how to act optimally in case of stroke. To determine the content of such a campaign, we assessed stroke knowledge in our population. METHODS: Stroke knowledge was studied by means of a closed-ended questionnaire. 422 randomly chosen inhabitants of Bern, Switzerland, were interviewed. RESULTS: The knowledge of stroke warning signs (WS) was classified as good in 64.7%. A good knowledge of stroke risk factors (RF) was noted in 6.4%. 4.2% knew both the WS and the RF of stroke, indicating a very good global knowledge of stroke. Only 8.3% recognized TIA as symptoms of stroke resolving within 24 hours, and only 2.8% identified TIA as a disease requiring immediate medical help. In multivariate analysis, being a woman, advancing age, and having an afflicted relative were associated with a good knowledge of WS (p = 0.048, p < 0.001 and p = 0.043, respectively). Good knowledge of RF was related to university education (p < 0.001). Good knowledge of TIA did not depend on age, sex, level of education or having an afflicted relative. CONCLUSIONS: The study brings to light relevant deficits of stroke knowledge in our population. Only a small number of participants could recognize TIA as stroke-related symptoms resolving completely within 24 hours. Only a third of the surveyed persons would seek immediate medical help in case of TIA. The information obtained will be used in the development of future educational campaigns.
Abstract:
Contracts paying a guaranteed minimum rate of return and a fraction of a positive excess rate, which is specified relative to a benchmark portfolio, are closely related to unit-linked life-insurance products and can be considered as alternatives to direct investment in the underlying benchmark. They contain an embedded power option, and the key issue is the tractable and realistic hedging of this option, in order to rigorously justify valuation by arbitrage arguments and prevent the guarantees from becoming uncontrollable liabilities to the issuer. We show how to determine the contract parameters conservatively and implement robust risk-management strategies.
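The payoff structure described above can be sketched with a minimal risk-neutral Monte Carlo valuation under geometric Brownian motion. All parameters here are hypothetical, and for simplicity the embedded option is modelled as a plain participation in the positive excess gross return over the guarantee (the paper's actual contracts embed a power option and require the robust hedging analysis the authors describe).

```python
import math
import random

def contract_value(r=0.03, sigma=0.2, T=5.0, g=0.01, alpha=0.8,
                   n=100_000, seed=7):
    """Risk-neutral Monte Carlo value, per unit notional, of a contract paying
    the guaranteed growth exp(g*T) plus a fraction alpha of the benchmark's
    positive excess gross return over that guarantee; the benchmark follows
    geometric Brownian motion. All parameters are hypothetical."""
    rng = random.Random(seed)
    guarantee = math.exp(g * T)          # guaranteed gross return
    disc = math.exp(-r * T)              # risk-free discount factor
    total = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        gross = math.exp((r - 0.5 * sigma ** 2) * T
                         + sigma * math.sqrt(T) * z)
        total += guarantee + alpha * max(gross - guarantee, 0.0)
    return disc * total / n
```

The value always exceeds the discounted guarantee; raising g or alpha raises the issuer's liability, which is why the abstract stresses determining the contract parameters conservatively.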
Abstract:
Meat and meat products can be contaminated with different species of bacteria resistant to various antimicrobials. The human health risk posed by emerging antimicrobial resistance in a given type of meat or meat product depends on (i) the prevalence of contamination with resistant bacteria, (ii) the human health consequences of an infection with a specific bacterium resistant to a specific antimicrobial and (iii) the consumption volume of the specific product. The objective of this study was to compare the risk for consumers arising from their exposure to antibiotic-resistant bacteria from meat of four different types (chicken, pork, beef and veal), distributed in four different product categories (fresh meat, frozen meat, dried raw meat products and heat-treated meat products). A semi-quantitative risk assessment model, evaluating each food chain step, was built in order to obtain an estimated score for the prevalence of Campylobacter spp., Enterococcus spp. and Escherichia coli in each product category. To assess human health impact, nine combinations of bacterial species and antimicrobial agents were considered based on a published risk profile. The combination of the prevalence at retail, the human health impact and the amount of meat or product consumed provided the relative proportion of total risk attributed to each category of product, resulting in a high, medium or low human health risk. According to the results of the model, chicken (mostly fresh and frozen meat) contributed 6.7% of the overall risk in the highest category and pork (mostly fresh meat and dried raw meat products) contributed 4.0%. The contributions of beef and veal were 0.4% and 0.1%, respectively. The results were tested and discussed for single-parameter changes of the model. This risk assessment was a useful tool for targeting antimicrobial resistance monitoring to those meat product categories where the expected risk for public health was greater.
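The model's core combination step — prevalence at retail × human health impact × consumption volume, normalised to a relative proportion of total risk — can be sketched as follows. The categories and scores below are invented for illustration and are not the study's actual values.

```python
# Hypothetical semi-quantitative scores: prevalence of resistant bacteria at
# retail (1-5), human health impact of the bacterium/antimicrobial combination
# (1-5), and relative consumption volume of the product category.
categories = {
    "chicken, fresh":     {"prevalence": 4, "impact": 4, "consumption": 3.0},
    "pork, dried raw":    {"prevalence": 3, "impact": 3, "consumption": 1.5},
    "beef, heat-treated": {"prevalence": 1, "impact": 3, "consumption": 2.0},
}

def risk_shares(cats: dict) -> dict:
    """Relative proportion of total risk attributed to each product category:
    the product of the three scores, normalised to sum to 1."""
    raw = {name: s["prevalence"] * s["impact"] * s["consumption"]
           for name, s in cats.items()}
    total = sum(raw.values())
    return {name: value / total for name, value in raw.items()}
```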
Abstract:
OBJECTIVES: To assess health care utilisation for patients co-infected with TB and HIV (TB-HIV), and to develop a weighted health care index (HCI) score based on commonly used interventions and compare it with patient outcome. METHODS: A total of 1061 HIV patients diagnosed with TB in four regions, Central/Northern, Southern and Eastern Europe and Argentina, between January 2004 and December 2006 were enrolled in the TB-HIV study. A weighted HCI score (range 0–5) was developed from independent prognostic factors identified in multivariable Cox models; the final score included performance of TB drug susceptibility testing (DST), an initial TB regimen containing a rifamycin, isoniazid and pyrazinamide, and start of combination antiretroviral treatment (cART). RESULTS: The mean HCI score was highest in Central/Northern Europe (3.2, 95%CI 3.1–3.3) and lowest in Eastern Europe (1.6, 95%CI 1.5–1.7). The cumulative probability of death 1 year after TB diagnosis decreased from 39% (95%CI 31–48) among patients with an HCI score of 0, to 9% (95%CI 6–13) among those with a score of ≥4. In an adjusted Cox model, a 1-unit increase in the HCI score was associated with 27% reduced mortality (relative hazard 0.73, 95%CI 0.64–0.84). CONCLUSIONS: Our results suggest that DST, standard anti-tuberculosis treatment and early cART may improve outcome for TB-HIV patients. The proposed HCI score provides a tool for future research and monitoring of the management of TB-HIV patients. The highest HCI score may serve as a benchmark to assess TB-HIV management, encouraging continuous health care improvement.
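The scoring idea can be sketched as below. The weights are hypothetical (the published score was derived from the Cox model coefficients); the 0.73 relative hazard per unit is the figure reported in the abstract.

```python
def hci_score(dst_performed: bool, standard_regimen: bool, cart_started: bool,
              weights=(1.0, 2.0, 2.0)) -> float:
    """Weighted health care index (range 0-5) from three interventions:
    TB drug susceptibility testing, a rifamycin/isoniazid/pyrazinamide
    initial regimen, and start of combination antiretroviral treatment.
    The weights here are illustrative, not the study's fitted values."""
    flags = (dst_performed, standard_regimen, cart_started)
    return sum(w for w, done in zip(weights, flags) if done)

def relative_hazard(score: float, per_unit_rh: float = 0.73) -> float:
    """Mortality hazard relative to a score of 0, assuming the reported
    27% reduction per 1-unit increase applies multiplicatively."""
    return per_unit_rh ** score
```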
Abstract:
Background Persons infected with human immunodeficiency virus (HIV) have increased rates of coronary artery disease (CAD). The relative contribution of genetic background, HIV-related factors, antiretroviral medications, and traditional risk factors to CAD has not been fully evaluated in the setting of HIV infection. Methods In the general population, 23 common single-nucleotide polymorphisms (SNPs) were shown to be associated with CAD through genome-wide association analysis. Using the Metabochip, we genotyped 1875 HIV-positive, white individuals enrolled in 24 HIV observational studies, including 571 participants with a first CAD event during the 9-year study period and 1304 controls matched on sex and cohort. Results A genetic risk score built from 23 CAD-associated SNPs contributed significantly to CAD (P = 2.9×10⁻⁴). In the final multivariable model, participants with an unfavorable genetic background (top genetic score quartile) had a CAD odds ratio (OR) of 1.47 (95% confidence interval [CI], 1.05–2.04). This effect was similar to hypertension (OR = 1.36; 95% CI, 1.06–1.73), hypercholesterolemia (OR = 1.51; 95% CI, 1.16–1.96), diabetes (OR = 1.66; 95% CI, 1.10–2.49), ≥1 year lopinavir exposure (OR = 1.36; 95% CI, 1.06–1.73), and current abacavir treatment (OR = 1.56; 95% CI, 1.17–2.07). The effect of the genetic risk score was additive to the effect of nongenetic CAD risk factors, and did not change after adjustment for family history of CAD. Conclusions In the setting of HIV infection, the effect of an unfavorable genetic background was similar to traditional CAD risk factors and certain adverse antiretroviral exposures. Genetic testing may provide prognostic information complementary to family history of CAD.
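A genetic risk score of this kind is typically the weighted sum of risk-allele counts, with each SNP's log odds ratio as its weight, and individuals in the top quartile are flagged as having an unfavorable genetic background. A sketch of that construction (the allele counts and effect sizes below are invented, not the study's 23 SNPs):

```python
import math

def genetic_risk_score(allele_counts, per_allele_ors):
    """Weighted allele score: sum over SNPs of the risk-allele count (0/1/2)
    times log(per-allele odds ratio)."""
    return sum(n * math.log(or_) for n, or_ in zip(allele_counts, per_allele_ors))

def top_quartile(scores):
    """Flag individuals in the top quartile ('unfavorable genetic background')."""
    cutoff = sorted(scores)[int(0.75 * len(scores))]
    return [s >= cutoff for s in scores]
```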
Abstract:
BACKGROUND Empirical research has illustrated an association between study size and relative treatment effects, but conclusions have been inconsistent about the association of study size with risk of bias items. Small studies generally give imprecisely estimated treatment effects, and study variance can serve as a surrogate for study size. METHODS We conducted a network meta-epidemiological study analyzing 32 networks including 613 randomized controlled trials, and used Bayesian network meta-analysis and meta-regression models to evaluate the impact of trial characteristics and study variance on the results of network meta-analysis. We examined changes in relative effects and between-studies variation in network meta-regression models as a function of the variance of the observed effect size and indicators for the adequacy of each risk of bias item. Adjustment was performed both within and across networks, allowing for between-networks variability. RESULTS Imprecise studies with large variances tended to exaggerate the effects of the active or new intervention in the majority of networks, with a ratio of odds ratios of 1.83 (95% CI 1.09–3.32). Inappropriate or unclear conduct of random sequence generation and allocation concealment, as well as lack of blinding of patients and outcome assessors, did not materially affect the summary results. Imprecise studies also appeared to be more prone to inadequate conduct. CONCLUSIONS Compared to more precise studies, studies with large variance may give substantially different answers that alter the results of network meta-analyses for dichotomous outcomes.
Abstract:
INFLUENCE OF ANCHORING ON MISCARRIAGE RISK PERCEPTION ASSOCIATED WITH AMNIOCENTESIS
Publication No. ___________
Regina Nuccio, BS
Supervisory Professor: Claire N. Singletary, MS, CGC
Amniocentesis is the most common invasive procedure performed during pregnancy (Eddleman et al., 2006). One important factor that women consider when making a decision about amniocentesis is the risk of miscarriage associated with the procedure. People use heuristics such as anchoring, in which a prior belief about the magnitude of a risk serves as a frame of reference when new information is synthesized, to better understand risks that they encounter in their lives. This study aimed to determine a woman's perception of the miscarriage risk associated with amniocentesis before and after a genetic counseling session, and to determine which factors are most likely to anchor that perception. Most women perceived the risk as low or average pre-counseling and were likely to indicate the numeric risk of amniocentesis as <1%. A higher percentage of patients correctly identified the numeric risk as <1% post-counseling than pre-counseling. However, the majority of patients' risk perception (60%) did not change after the genetic counseling session, regardless of how they perceived the risk before discussing amniocentesis with a genetic counselor. Those whose risk perception did change showed a decreased risk perception (p<0.0001). Of the multitude of factors studied, only two showed significance: having a friend or relative with a personal or family history of a genetic disorder was associated with a lower risk perception (p=0.001), and already having a child was associated with a lower risk perception (p=0.038). 
The lack of significant factors may reflect the uniqueness of each patient’s heuristic framework and reinforces the importance of genetic counseling to elucidate individual concerns.
Abstract:
Many persons in the U.S. gain weight during young adulthood, and the prevalence of obesity has been increasing among young adults. Although obesity and physical inactivity are generally recognized as risk factors for coronary heart disease (CHD), the magnitude of their effect on risk may have been seriously underestimated due to failure to adequately handle the problem of cigarette smoking. Since cigarette smoking causes weight loss, physically inactive cigarette smokers may remain relatively lean because they smoke cigarettes. We hypothesize that cigarette smoking modifies the association between weight gain during young adulthood and risk of coronary heart disease during middle age, and that the true effect of weight gain during young adulthood on risk of CHD can be assessed only in persons who have not smoked cigarettes. Specifically, we hypothesize that weight gain during young adulthood is positively associated with risk of CHD during middle age in nonsmokers but that the association is much smaller or absent entirely among cigarette smokers. The purpose of this study was to test this hypothesis. The population for analysis comprised 1,934 middle-aged, employed men whose average age at the baseline examination was 48.7 years. Information collected at the baseline examinations in 1958 and 1959 included recalled weight at age 20, present weight, height, smoking status, and other CHD risk factors. To decrease the effect of intraindividual variation, the mean values of the 1958 and 1959 baseline examinations were used in analyses. 
Change in body mass index (ΔBMI) during young adulthood was the primary exposure variable and was measured as BMI at baseline (kg/m²) minus BMI at age 20 (kg/m²). Proportional hazards regression analysis was used to generate relative risks of CHD mortality by category of ΔBMI and cigarette smoking status after adjustment for age, family history of CVD, major organ system disease, BMI at age 20, and number of cigarettes smoked per day. Adjustment was not performed for systolic blood pressure or total serum cholesterol, as these were regarded as intervening variables. Vital status was known for all men on the 25th anniversary of their baseline examinations. 705 deaths (including 319 CHD deaths) occurred over 40,136 person-years of experience. ΔBMI was positively associated with risk of CHD mortality in never-smokers, but not in ever-smokers (p for interaction = 0.067). For never-smokers with ΔBMI of stable, low gain, moderate gain, and high gain, adjusted relative risks were 1.00, 1.62, 1.61, and 2.78, respectively (p for trend = 0.010). For ever-smokers with ΔBMI of stable, low gain, moderate gain, and high gain, adjusted relative risks were 1.00, 0.74, 1.07, and 1.06, respectively (p for trend = 0.422). These results support the research hypothesis that cigarette smoking modifies the association between weight gain and CHD mortality. Current estimates of the magnitude of effect of obesity and physical inactivity on risk of coronary mortality may have been seriously underestimated due to inadequate handling of cigarette smoking.
Abstract:
BACKGROUND The objective of the present investigation is to assess the baseline mortality-adjusted 10-year survival of rectal cancer patients. METHODS Ten-year survival was analyzed in 771 consecutive American Joint Committee on Cancer (AJCC) stage I-IV rectal cancer patients undergoing open resection between 1991 and 2008 using risk-adjusted Cox proportional hazard regression models adjusting for population-based baseline mortality. RESULTS The median follow-up of patients alive was 8.8 years. The 10-year relative, overall, and cancer-specific survival were 66.5% [95% confidence interval (CI) 61.3-72.1], 48.7% (95% CI 44.9-52.8), and 66.4% (95% CI 62.5-70.5), respectively. During the 10-year period, 47.3% of all deaths in the entire patient sample (stage I-IV), and 33.6% of deaths in patients with stage I-III disease, were related to rectal cancer. For patients with AJCC stage I rectal cancer, the 10-year overall survival was 96% and did not significantly differ from an average population after matching for gender, age, and calendar year (p = 0.151). For the more advanced tumor stages, however, survival was significantly impaired (p < 0.001). CONCLUSIONS Retrospective investigations of survival after rectal cancer resection should adjust for baseline mortality because a large fraction of deaths is not cancer related. Stage I rectal cancer patients, compared to patients with more advanced disease stages, have a relative survival close to 100% and can thus be considered cured. Using this relative-survival approach, the real public health burden caused by rectal cancer can reliably be analyzed and reported.
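Relative survival is simply the observed survival of the cohort divided by the expected survival of a general population matched on sex, age, and calendar year. Using the abstract's own figures (overall 48.7%, relative 66.5%), the implied expected 10-year survival of the matched population is about 0.487 / 0.665 ≈ 73%:

```python
def relative_survival(observed: float, expected: float) -> float:
    """Observed survival of the patient cohort divided by the expected
    survival of a demographically matched general population."""
    return observed / expected

# Figures from the abstract: overall survival 48.7%, relative survival 66.5%,
# implying a matched-population expected survival of roughly 73%.
implied_expected = 0.487 / 0.665
```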
Abstract:
This article provides importance sampling algorithms for computing the probabilities of various types of ruin of spectrally negative Lévy risk processes: ruin over the infinite time horizon, ruin within a finite time horizon and ruin past a finite time horizon. For the special case of the compound Poisson process perturbed by diffusion, algorithms for computing probabilities of ruin by creeping (i.e. induced by the diffusion term) and by jumping (i.e. by a claim amount) are provided. It is shown that these algorithms have either bounded relative error or logarithmic efficiency as t, x → ∞, where t > 0 is the time horizon and x > 0 is the starting point of the risk process, with y = t/x held constant and assumed either below or above a certain constant.
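For the classical special case (compound Poisson claims with exponential sizes, no diffusion term), the exponential-tilting construction behind such importance sampling algorithms can be sketched as follows. The simulation runs under a tilted measure with claim-arrival rate c·μ and claim-size rate λ/c, under which ruin is certain, and each path is weighted by exp(−γ·Y(τ)), where γ = μ − λ/c is the Lundberg coefficient. All parameter values are hypothetical; the paper's algorithms cover the much more general spectrally negative Lévy setting.

```python
import math
import random

def ruin_prob_is(x, lam=1.0, mu=1.0, c=1.5, n=20_000, seed=1):
    """Importance-sampling estimate of the infinite-horizon ruin probability
    for the Cramer-Lundberg model: premiums at rate c, Poisson(lam) claims
    with exponential(mu) sizes, initial capital x (requires c > lam/mu).

    Under the exponentially tilted measure (claim rate c*mu, claim-size rate
    lam/c) ruin is certain, and each path carries the likelihood-ratio weight
    exp(-gamma * Y(tau)) with gamma = mu - lam/c."""
    gamma = mu - lam / c
    lam_tilt = c * mu            # tilted claim-arrival rate
    mu_tilt = lam / c            # tilted claim-size rate
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = 0.0                  # claim-surplus process Y(t) = S(t) - c*t
        while y <= x:            # ruin occurs when Y exceeds the capital x
            y -= c * rng.expovariate(lam_tilt)  # premiums until next claim
            y += rng.expovariate(mu_tilt)       # claim size under tilting
        total += math.exp(-gamma * y)           # likelihood-ratio weight
    return total / n
```

For exponential claims the exact answer is (λ/(c·μ))·exp(−γ·x), so the estimator can be checked against the closed form; the weights are bounded by exp(−γ·x), which is what delivers bounded relative error.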
Abstract:
BACKGROUND Observational studies of a putative association between hormonal contraception (HC) and HIV acquisition have produced conflicting results. We conducted an individual participant data (IPD) meta-analysis of studies from sub-Saharan Africa to compare the incidence of HIV infection in women using combined oral contraceptives (COCs) or the injectable progestins depot medroxyprogesterone acetate (DMPA) or norethisterone enanthate (NET-EN) with women not using HC. METHODS AND FINDINGS Eligible studies measured HC exposure and incident HIV infection prospectively using standardized measures, enrolled women aged 15-49 y, recorded ≥15 incident HIV infections, and measured prespecified covariates. Our primary analysis estimated the adjusted hazard ratio (aHR) using two-stage random effects meta-analysis, controlling for region, marital status, age, number of sex partners, and condom use. We included 18 studies, including 37,124 women (43,613 woman-years) and 1,830 incident HIV infections. Relative to no HC use, the aHR for HIV acquisition was 1.50 (95% CI 1.24-1.83) for DMPA use, 1.24 (95% CI 0.84-1.82) for NET-EN use, and 1.03 (95% CI 0.88-1.20) for COC use. Between-study heterogeneity was mild (I² < 50%). DMPA use was associated with increased HIV acquisition compared with COC use (aHR 1.43, 95% CI 1.23-1.67) and NET-EN use (aHR 1.32, 95% CI 1.08-1.61). Effect estimates were attenuated for studies at lower risk of methodological bias (compared with no HC use, aHR for DMPA use 1.22, 95% CI 0.99-1.50; for NET-EN use 0.67, 95% CI 0.47-0.96; and for COC use 0.91, 95% CI 0.73-1.41) compared to those at higher risk of bias (p for interaction = 0.003). Neither age nor herpes simplex virus type 2 infection status modified the HC-HIV relationship. 
CONCLUSIONS This IPD meta-analysis found no evidence that COC or NET-EN use increases women's risk of HIV but adds to the evidence that DMPA may increase HIV risk, underscoring the need for additional safe and effective contraceptive options for women at high HIV risk. A randomized controlled trial would provide more definitive evidence about the effects of hormonal contraception, particularly DMPA, on HIV risk.
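The second stage of the two-stage random-effects approach described above can be sketched with a standard DerSimonian-Laird pooling of study-level log hazard ratios (the first stage — fitting the adjusted model within each study — is omitted, and any numbers fed to this function are invented for illustration):

```python
import math

def random_effects_pool(log_hrs, variances):
    """DerSimonian-Laird random-effects pooling of study-level log hazard
    ratios with their variances; returns (pooled HR, 95% CI low, 95% CI high)."""
    w = [1.0 / v for v in variances]                  # fixed-effect weights
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, log_hrs)) / sw
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, log_hrs))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(log_hrs) - 1)) / c)     # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]      # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, log_hrs)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se))
```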
Abstract:
Conventional risk assessments for crop protection chemicals compare the potential for causing toxicity (hazard identification) to anticipated exposure. New regulatory approaches have been proposed that would exclude exposure assessment and just focus on hazard identification based on endocrine disruption. This review comprises a critical analysis of hazard, focusing on the relative sensitivity of endocrine and non-endocrine endpoints, using a class of crop protection chemicals, the azole fungicides. These were selected because they are widely used on important crops (e.g. grains) and thereby can contact target and non-target plants and enter the food chain of humans and wildlife. Inhibition of lanosterol 14α-demethylase (CYP51) mediates the antifungal effect. Inhibition of other CYPs, such as aromatase (CYP19), can lead to numerous toxicological effects, which are also evident from high dose human exposures to therapeutic azoles. Because of its widespread use and substantial database, epoxiconazole was selected as a representative azole fungicide. Our critical analysis concluded that anticipated human exposure to epoxiconazole would yield a margin of safety of at least three orders of magnitude for reproductive effects observed in laboratory rodent studies that are postulated to be endocrine-driven (i.e. fetal resorptions). The most sensitive ecological species is the aquatic plant Lemna (duckweed), for which the margin of safety is less protective than for human health. For humans and wildlife, endocrine disruption is not the most sensitive endpoint. It is concluded that conventional risk assessment, considering anticipated exposure levels, will be protective of both human and ecological health. Although the toxic mechanisms of other azole compounds may be similar, large differences in potency will require a case-by-case risk assessment.