825 results for Risk Assessment Code
Abstract:
Risk management in healthcare comprises a group of complex actions implemented to improve the quality of healthcare services and guarantee patient safety. Risks cannot be eliminated, but they can be controlled with risk assessment methods derived from industrial applications; among these, Failure Mode, Effects and Criticality Analysis (FMECA) is a widely used methodology. The main purpose of this work is the analysis of failure modes of the Home Care (HC) service provided by the local healthcare unit of Naples (ASL NA1), focusing attention on human and non-human factors according to the organizational framework selected by the WHO. © Springer International Publishing Switzerland 2014.
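The abstract names FMECA but does not show its scoring mechanics. A minimal sketch of the usual FMECA ranking step, in which each failure mode receives severity, occurrence, and detectability scores whose product (the Risk Priority Number) orders the modes for corrective action; the failure modes and scores below are illustrative assumptions, not data from the study:

```python
# FMECA-style ranking sketch: each failure mode gets severity (S),
# occurrence (O), and detectability (D) scores on a 1-10 scale; the
# Risk Priority Number RPN = S * O * D orders modes for attention.
# Modes and scores are invented for illustration.

failure_modes = [
    {"mode": "wrong drug dose administered", "S": 9, "O": 3, "D": 4},
    {"mode": "missed home visit",            "S": 5, "O": 4, "D": 2},
    {"mode": "equipment not sanitized",      "S": 7, "O": 2, "D": 5},
]

for fm in failure_modes:
    fm["RPN"] = fm["S"] * fm["O"] * fm["D"]

# Highest RPN first: these failure modes get priority for corrective action.
ranked = sorted(failure_modes, key=lambda fm: fm["RPN"], reverse=True)
for fm in ranked:
    print(f'{fm["RPN"]:>4}  {fm["mode"]}')
```

Real FMECA studies, including healthcare applications, often extend this with team-based scoring and criticality matrices; the multiplicative RPN is only the most common variant.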
Abstract:
The aim of the case study is to quantify the impact of delayed repair time on revenues and profit, using the outage of power plant units as an example. The main steps of the risk assessment are:
• creating a project plan suitable for risk assessment;
• identifying the risk factors for each project activity;
• scenario-analysis-based evaluation of the risk factors;
• selecting the critical risk factors based on the results of the quantitative risk analysis;
• formulating risk response actions for the critical risks;
• running a Monte-Carlo simulation [1] using the results of the scenario analysis;
• building a macro that connects the results of the risk assessment, the production plan and the business plan.
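The Monte-Carlo step above can be sketched as follows: each critical risk factor gets a scenario-based three-point delay estimate, trials draw a total outage delay from those distributions, and outage days are converted into lost revenue. The risk factors, delay estimates, and daily revenue figure are assumptions for illustration, not values from the case study:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

DAILY_REVENUE = 120_000  # lost revenue per outage day (assumed figure)

# (optimistic, most likely, pessimistic) repair delay in days per factor;
# all three-point estimates are invented for illustration.
risk_factors = {
    "spare part delivery": (0, 2, 10),
    "crane availability":  (0, 1, 5),
    "weather":             (0, 0, 3),
}

def one_trial():
    """Draw one scenario: sum a triangular delay sample per risk factor."""
    total_delay = sum(random.triangular(lo, hi, ml)
                      for lo, ml, hi in risk_factors.values())
    return total_delay * DAILY_REVENUE

losses = sorted(one_trial() for _ in range(10_000))
mean_loss = sum(losses) / len(losses)
p90_loss = losses[int(0.9 * len(losses))]
print(f"mean loss ~ {mean_loss:,.0f}; 90th percentile ~ {p90_loss:,.0f}")
```

In the study this role is played by a spreadsheet macro fed by the scenario analysis; the triangular distributions stand in for whatever scenario-based distributions the authors actually used.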
Abstract:
The present study identified and compared Coronary Heart Disease (CHD) risk factors, quantified as "CHD risk point standards" (CHDRPS), among tri-ethnic (White non-Hispanic [WNH], Hispanic [H], and Black non-Hispanic [BNH]) college students. All 300 tri-ethnic subjects completed the Cardiovascular Risk Assessment Instruments and had blood pressure readings recorded on three occasions. Bioelectrical Impedance Analysis (BIA) was used to measure body composition. Students' knowledge of CHD risk factors was also measured. In addition, a 15 ml fasting blood sample was collected from 180 subjects, and blood lipid and homocysteine (tHcy) levels were measured. Data were analyzed by gender and ethnicity using one-way Analysis of Variance (ANOVA) with Bonferroni's pairwise mean comparison procedure, Pearson correlation, and Chi-square tests with follow-up Bonferroni's Chi-square tests. The mean CHDRPS score for all subjects was 19.15 ± 6.79. Assigned to CHD risk categories, the college students were at below-average risk of developing CHD. Males scored significantly (p < 0.013) higher for CHD risk than females, and BNHs scored significantly (p < 0.033) higher than WNHs. High consumption of dietary fat, saturated fat, and cholesterol resulted in a high CHDRPS among H males and females and WNH females. High alcohol consumption resulted in a high CHDRPS among all subjects. Mean tHcy ± SD of all subjects was 6.33 ± 3.15 μmol/L. Males had significantly (p < 0.001) higher tHcy than females. Black non-Hispanic females and H females had significantly (p < 0.003) lower tHcy than WNH females. Positive associations were found between tHcy levels and CHDRPS among females (p < 0.001), Hs (p < 0.001), H males (p < 0.049), H females (p < 0.009), and BNH females (p < 0.005).
Significant positive correlations were found between BMI levels and CHDRPS in males (p < 0.001), females (p < 0.001), WNHs (p < 0.008), Hs (p < 0.001), WNH males (p < 0.024), H males (p < 0.004), and H females (p < 0.001). The mean score on the CHD knowledge questions for all subjects was 71.70 ± 7.92 out of 100. Mean CHD knowledge was significantly higher for WNH males (p < 0.039) than for BNH males. A significant inverse correlation (r = 0.392, p < 0.032) was found between CHD knowledge and CHDRPS in WNH females. The researcher's findings indicate strong gender and ethnic differences in CHD risk factors among the college-age population.
Abstract:
Childhood lead poisoning is a major consequence of contamination affecting local populations, at the intersection of environmental health and environmental engineering. Environmental contamination is one of the pressing environmental concerns facing the world today. Current approaches often focus on large, industrial-scale contaminated sites that are designated by regulatory agencies for remediation. Prior to this study, there were no known published studies conducted at the local, smaller scale, such as neighborhoods, where much of the contamination requiring remediation is often present. An environmental health study of local lead-poisoning data showed that Liberty City, Little Haiti, and eastern Little Havana in Miami-Dade County, Florida accounted for a disproportionately high number of the county's reported childhood lead poisoning cases. An engineering system was developed and designed for a comprehensive risk management methodology distinctively applicable to the geographical and environmental conditions of Miami-Dade County, Florida. Furthermore, a scientific approach for interpreting environmental health concerns, involving detailed environmental engineering control measures and methods for remediation of contaminated media, was developed for implementation. Test samples were obtained from residents and sites in those specific communities in Miami-Dade County, Florida (Gasana and Chamorro 2002). Currently, lead does not have an oral assessment, an inhalation assessment, or an oral slope factor, the variables required to run a quantitative risk assessment. However, the standards and regulations of various federal agencies for lead-contaminated media yield adequate maximum concentration limits (MCLs). For this study an MCL of 0.0015 mg/L was used. A risk management approach to lead-contaminated media demonstrates that linking environmental health and environmental engineering can yield a feasible solution.
Abstract:
Subtitle D of the Resource Conservation and Recovery Act (RCRA) requires a post-closure period of 30 years for non-hazardous waste landfills. Post-closure care (PCC) activities under Subtitle D include leachate collection and treatment, groundwater monitoring, inspection and maintenance of the final cover, and monitoring to ensure that landfill gas does not migrate off site or into on-site buildings. The decision to reduce PCC duration requires exploration of a performance-based methodology for Florida landfills. PCC should be based on whether the landfill is a threat to human health or the environment. Historically, no risk-based procedure has been available to establish an early end to PCC. Landfill stability depends on a number of factors, including variables that relate to operations both before and after the closure of a landfill cell. Therefore, PCC decisions should be based on location-specific factors, operational factors, design factors, post-closure performance, end use, and risk analysis. The question of the appropriate PCC period for Florida's landfills requires in-depth case studies focusing on the analysis of performance data from closed landfills in Florida. Based on data availability, Davie Landfill was identified as the case study site for a case-by-case analysis of landfill stability. The performance-based PCC decision system developed by Geosyntec Consultants was used to assess site conditions and project PCC needs. The available data on leachate and gas quantity and quality, groundwater quality, and cap conditions were evaluated. The quality and quantity data for leachate and gas were analyzed to project the levels of pollutants in leachate and groundwater in reference to maximum contaminant levels (MCLs). In addition, the gas quantity was projected. A set of contaminants (including metals and organics) detected in groundwater was identified for health risk assessment.
These contaminants were selected based on their detection frequency and levels in leachate and groundwater, and on their historical and projected trends. During the evaluations, a range of discrepancies and problems related to data collection and documentation were encountered, and possible solutions were proposed. Based on the results of PCC performance integrated with risk assessment, future PCC monitoring needs and sustainable waste management options were identified. According to these results, landfill gas monitoring can be terminated, while leachate and groundwater monitoring for parameters above MCLs and surveying of cap integrity should be continued. The parameters that cause longer monitoring periods can be eliminated in future sustainable landfills. In conclusion, the 30-year PCC period can be reduced for some landfill components based on their potential impacts on human health and the environment (HH&E).
Abstract:
Family health history (FHH) in the context of risk assessment has been shown to positively impact risk perception and behavior change. The added value of genetic risk testing is less certain. The aim of this study was to determine the impact of Type 2 Diabetes (T2D) FHH and genetic risk counseling on behavior and its cognitive precursors. Subjects were non-diabetic patients randomized to counseling that included FHH with or without T2D genetic testing. Measurements included weight, BMI, and fasting glucose at baseline and 12 months, and surveys of behavior and cognitive precursors (T2D risk perception and perceived control over disease development) at baseline, 3, and 12 months. 391 subjects enrolled, of whom 312 completed the study. Behavioral and clinical outcomes did not differ across FHH or genetic risk, but cognitive precursors did. Higher FHH risk was associated with a stronger perceived T2D risk (pKendall < 0.001) and with a perception of "serious" risk (pKendall < 0.001). Genetic risk did not influence risk perception overall, but was correlated with an increased perception of "serious" risk in subjects at moderate (pKendall = 0.04) and average (pKendall = 0.01) FHH risk, though not in the high FHH risk group. Perceived control over T2D risk was high and not affected by FHH or genetic risk. FHH appears to have a strong impact on the cognitive precursors of behavior change, suggesting it could be leveraged to enhance risk counseling, particularly when lifestyle change is desirable. Genetic risk altered perceptions about the seriousness of T2D risk in those with moderate and average FHH risk, suggesting that FHH could be used to selectively identify individuals who may benefit from genetic risk testing.
Abstract:
OBJECTIVE: The Thrombolysis in Myocardial Infarction (TIMI) score is a validated tool for risk stratification in acute coronary syndrome. We hypothesized that the TIMI risk score would be able to risk stratify patients evaluated for acute coronary syndrome in an observation unit. METHODS: STUDY DESIGN: Retrospective cohort study of consecutive adult patients placed in the observation unit of an urban academic hospital emergency department with an average annual census of 65,000 between 2004 and 2007. Exclusion criteria included elevated initial cardiac biomarkers, ST-segment changes on ECG, unstable vital signs, or unstable arrhythmias. A composite of significant coronary artery disease (CAD) indicators, including diagnosis of myocardial infarction, percutaneous coronary intervention, coronary artery bypass surgery, or death within 30 days and 1 year, was abstracted via chart review and financial record query. The cohort was stratified by TIMI risk score (0-7), and composite event rates with 95% confidence intervals were calculated. RESULTS: In total, 2228 patients were analyzed. The average age was 54.5 years, and 42.0% were male. The overall median TIMI risk score was 1. Eighty (3.6%) patients had 30-day and 119 (5.3%) had 1-year CAD indicators. There was a trend toward an increasing rate of composite CAD indicators at 30 days and 1 year with increasing TIMI score, ranging from an event rate of 1.2% at 30 days and 1.9% at 1 year for a TIMI score of 0, to 12.5% at 30 days and 21.4% at 1 year for TIMI ≥ 4. CONCLUSIONS: In an observation unit cohort, the TIMI risk score is able to stratify patients into low-, moderate-, and high-risk groups.
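The abstract reports event rates rising with TIMI score but does not state the cut-offs used to define the low-, moderate-, and high-risk groups. A minimal sketch of how such banding might be coded, with the bands (0-1 low, 2-3 moderate, ≥4 high) assumed for illustration rather than taken from the study:

```python
def timi_risk_group(score: int) -> str:
    """Map a TIMI risk score (0-7) to an assumed risk band.

    The cut-offs are illustrative; the study does not publish them
    in the abstract.
    """
    if not 0 <= score <= 7:
        raise ValueError("TIMI risk score must be between 0 and 7")
    if score <= 1:
        return "low"
    if score <= 3:
        return "moderate"
    return "high"

print(timi_risk_group(0))  # low
print(timi_risk_group(5))  # high
```

The ≥4 band matches the highest stratum reported in the abstract (12.5% 30-day event rate); the split between low and moderate is purely an assumption.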
Abstract:
Estimation of the absolute risk of cardiovascular disease (CVD), preferably with population-specific risk charts, has become a cornerstone of CVD primary prevention. Regular recalibration of risk charts may be necessary due to decreasing CVD rates and CVD risk factor levels. The SCORE risk charts for fatal CVD risk assessment were first calibrated for Germany with risk factor level data from 1998 and mortality statistics from 1999. We present an update of these risk charts based on the SCORE methodology, including estimates of relative risks from SCORE, risk factor levels from the German Health Interview and Examination Survey for Adults 2008-11 (DEGS1), and official mortality statistics from 2012. Competing risks methods were applied and the estimates were independently validated. Updated risk charts were calculated based on cholesterol, smoking, and systolic blood pressure levels, sex, and 5-year age groups. The absolute 10-year risk estimates of fatal CVD were lower according to the updated risk charts than under the first calibration for Germany. In a nationwide sample of 3062 adults aged 40-65 years free of major CVD from DEGS1, the mean 10-year risk of fatal CVD estimated by the updated charts was lower by 29%, and the estimated proportion of high-risk people (10-year risk ≥ 5%) lower by 50%, compared with the older risk charts. This recalibration shows the need for regular updates of risk charts in line with changes in mortality and risk factor levels in order to sustain the identification of people at high CVD risk.
Abstract:
PURPOSE: To evaluate the addition of cetuximab to neoadjuvant chemotherapy before chemoradiotherapy in high-risk rectal cancer. PATIENTS AND METHODS: Patients with operable magnetic resonance imaging-defined high-risk rectal cancer received four cycles of capecitabine/oxaliplatin (CAPOX) followed by capecitabine chemoradiotherapy, surgery, and adjuvant CAPOX (four cycles) or the same regimen plus weekly cetuximab (CAPOX+C). The primary end point was complete response (CR; pathologic CR or, in patients not undergoing surgery, radiologic CR) in patients with KRAS/BRAF wild-type tumors. Secondary end points were radiologic response (RR), progression-free survival (PFS), overall survival (OS), and safety in the wild-type and overall populations and a molecular biomarker analysis. RESULTS: One hundred sixty-five eligible patients were randomly assigned. Ninety (60%) of 149 assessable tumors were KRAS or BRAF wild type (CAPOX, n = 44; CAPOX+C, n = 46), and in these patients, the addition of cetuximab did not improve the primary end point of CR (9% v 11%, respectively; P = 1.0; odds ratio, 1.22) or PFS (hazard ratio [HR], 0.65; P = .363). Cetuximab significantly improved RR (CAPOX v CAPOX+C: after chemotherapy, 51% v 71%, respectively; P = .038; after chemoradiation, 75% v 93%, respectively; P = .028) and OS (HR, 0.27; P = .034). Skin toxicity and diarrhea were more frequent in the CAPOX+C arm. CONCLUSION: Cetuximab led to a significant increase in RR and OS in patients with KRAS/BRAF wild-type rectal cancer, but the primary end point of improved CR was not met.
Abstract:
AIMS: Our aims were to evaluate the distribution of troponin I concentrations in population cohorts across Europe, to characterize the association with cardiovascular outcomes, to determine the predictive value beyond the variables used in the ESC SCORE, to test a potentially clinically relevant cut-off value, and to evaluate the improved eligibility for statin therapy based on elevated troponin I concentrations retrospectively.
METHODS AND RESULTS: Based on the Biomarkers for Cardiovascular Risk Assessment in Europe (BiomarCaRE) project, we analysed individual level data from 10 prospective population-based studies including 74 738 participants. We investigated the value of adding troponin I levels to conventional risk factors for prediction of cardiovascular disease by calculating measures of discrimination (C-index) and net reclassification improvement (NRI). We further tested the clinical implication of statin therapy based on troponin concentration in 12 956 individuals free of cardiovascular disease in the JUPITER study. Troponin I remained an independent predictor with a hazard ratio of 1.37 for cardiovascular mortality, 1.23 for cardiovascular disease, and 1.24 for total mortality. The addition of troponin I information to a prognostic model for cardiovascular death constructed of ESC SCORE variables increased the C-index discrimination measure by 0.007 and yielded an NRI of 0.048, whereas the addition to prognostic models for cardiovascular disease and total mortality led to lesser C-index discrimination and NRI increment. In individuals above 6 ng/L of troponin I, a concentration near the upper quintile in BiomarCaRE (5.9 ng/L) and JUPITER (5.8 ng/L), rosuvastatin therapy resulted in higher absolute risk reduction compared with individuals <6 ng/L of troponin I, whereas the relative risk reduction was similar.
CONCLUSION: In individuals free of cardiovascular disease, the addition of troponin I to the variables of an established risk score improves the prediction of cardiovascular death and cardiovascular disease.
Abstract:
OBJECTIVE: To determine risk of Down syndrome (DS) in multiple relative to singleton pregnancies, and compare prenatal diagnosis rates and pregnancy outcome.
DESIGN: Population-based prevalence study based on EUROCAT congenital anomaly registries.
SETTING: Eight European countries.
POPULATION: 14.8 million births 1990-2009; 2.89% multiple births.
METHODS: DS cases included livebirths, fetal deaths from 20 weeks, and terminations of pregnancy for fetal anomaly (TOPFA). Zygosity is inferred from like/unlike sex for birth denominators, and from concordance for DS cases.
MAIN OUTCOME MEASURES: Relative risk (RR) of DS per fetus/baby from multiple versus singleton pregnancies and per pregnancy in monozygotic/dizygotic versus singleton pregnancies. Proportion of prenatally diagnosed and pregnancy outcome.
STATISTICAL ANALYSIS: Poisson and logistic regression stratified for maternal age, country and time.
RESULTS: Overall, the adjusted (adj) RR of DS for fetus/babies from multiple versus singleton pregnancies was 0.58 (95% CI 0.53-0.62), similar for all maternal ages except for mothers over 44, for whom it was considerably lower. In 8.7% of twin pairs affected by DS, both co-twins were diagnosed with the condition. The adjRR of DS for monozygotic versus singleton pregnancies was 0.34 (95% CI 0.25-0.44) and for dizygotic versus singleton pregnancies 1.34 (95% CI 1.23-1.46). DS fetuses from multiple births were less likely to be prenatally diagnosed than singletons (adjOR 0.62 [95% CI 0.50-0.78]) and following diagnosis less likely to be TOPFA (adjOR 0.40 [95% CI 0.27-0.59]).
CONCLUSIONS: The risk of DS per fetus/baby is lower in multiple than singleton pregnancies. These estimates can be used for genetic counselling and prenatal screening.
Abstract:
Evidence of an association between early pregnancy exposure to selective serotonin reuptake inhibitors (SSRI) and congenital heart defects (CHD) has contributed to recommendations to weigh benefits and risks carefully. The objective of this study was to determine the specificity of association between first trimester exposure to SSRIs and specific CHD and other congenital anomalies (CA) associated with SSRI exposure in the literature (signals). A population-based case-malformed control study was conducted in 12 EUROCAT CA registries covering 2.1 million births 1995-2009 including livebirths, fetal deaths from 20 weeks gestation and terminations of pregnancy for fetal anomaly. Babies/fetuses with specific CHD (n = 12,876) and non-CHD signal CA (n = 13,024), were compared with malformed controls whose diagnosed CA have not been associated with SSRI in the literature (n = 17,083). SSRI exposure in first trimester pregnancy was associated with CHD overall (OR adjusted for registry 1.41, 95% CI 1.07-1.86, fluoxetine adjOR 1.43 95% CI 0.85-2.40, paroxetine adjOR 1.53, 95% CI 0.91-2.58) and with severe CHD (adjOR 1.56, 95% CI 1.02-2.39), particularly Tetralogy of Fallot (adjOR 3.16, 95% CI 1.52-6.58) and Ebstein's anomaly (adjOR 8.23, 95% CI 2.92-23.16). Significant associations with SSRI exposure were also found for ano-rectal atresia/stenosis (adjOR 2.46, 95% CI 1.06-5.68), gastroschisis (adjOR 2.42, 95% CI 1.10-5.29), renal dysplasia (adjOR 3.01, 95% CI 1.61-5.61), and clubfoot (adjOR 2.41, 95% CI 1.59-3.65). These data support a teratogenic effect of SSRIs specific to certain anomalies, but cannot exclude confounding by indication or associated factors.
Abstract:
Despite its huge potential in risk analysis, the Dempster–Shafer Theory of Evidence (DST) has not received enough attention in construction management. This paper presents a DST-based approach for structuring personal experience and professional judgment when assessing construction project risk. DST was used innovatively to tackle the problem of insufficient information by enabling analysts to provide incomplete assessments. Risk cost is used as a common scale for measuring risk impact on the various project objectives, and the Evidential Reasoning algorithm is suggested as a novel alternative for aggregating individual assessments. A spreadsheet-based decision support system (DSS) was devised to facilitate the proposed approach. Four case studies were conducted to examine the approach's viability. Senior managers in four British construction companies tried the DSS and gave very promising feedback. The paper concludes that the proposed methodology may help bridge the gap between theory and practice in construction risk assessment.
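The core DST machinery the abstract relies on can be illustrated with Dempster's rule of combination, which aggregates two analysts' belief-mass assignments over a frame of discernment; leaving mass on the whole frame is how DST represents an incomplete assessment. The paper itself uses the Evidential Reasoning algorithm for aggregation; the classic rule below is shown only as the simpler, closely related operation, with invented numbers:

```python
from itertools import product

# Frame of discernment: the exhaustive set of risk outcomes considered.
FRAME = frozenset({"low", "high"})

def combine(m1, m2):
    """Dempster's rule: combine two mass functions over the same frame.

    Masses of intersecting focal sets multiply; mass assigned to
    conflicting (disjoint) pairs is discarded and renormalised away.
    """
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    k = 1.0 - conflict  # normalisation constant
    return {s: m / k for s, m in combined.items()}

# Analyst 1 is fairly sure the risk is high; analyst 2 gives an
# incomplete assessment, leaving half the mass on the whole frame
# ("don't know"). All numbers are illustrative.
m1 = {frozenset({"high"}): 0.8, FRAME: 0.2}
m2 = {frozenset({"high"}): 0.3, frozenset({"low"}): 0.2, FRAME: 0.5}

m12 = combine(m1, m2)
for focal, mass in sorted(m12.items(), key=lambda kv: -kv[1]):
    print(f"{set(focal)}: {mass:.3f}")
```

Note how the combined result still carries residual mass on the whole frame: unlike a Bayesian prior, DST never forces the analysts to distribute belief they do not have, which is exactly the property the paper exploits for incomplete assessments.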