929 results for Risk analysis
Abstract:
Enterprise Risk Management (ERM) and Knowledge Management (KM) both encompass top-down and bottom-up approaches for developing and embedding risk knowledge concepts and processes in strategy, policies, risk appetite definition, decision-making and business processes. The capacity to transfer risk knowledge affects all stakeholders, and understanding the risk knowledge about the enterprise's value is a key requirement for identifying protection strategies for business sustainability. Various factors affect this capacity for transfer and understanding. Previous work has established that the influence of KM variables differs between Risk Control and the perceived value of ERM: communication among groups appears as a significant variable in improving Risk Control, but only as a weak factor in improving the perceived value of ERM. However, implementing the ERM mandate requires a clear understanding of risk management (RM) policies, actions and results, and the use of an integral view of RM as a governance and compliance program to support the value-driven management of the organization. Furthermore, ERM implementation demands better capabilities for unifying the criteria of risk analysis and aligning policies and protection guidelines across the organization. These capabilities can be affected by risk knowledge sharing between the RM group, the Board of Directors and other executives in the organization. This research presents an exploratory analysis of risk knowledge transfer variables used in risk management practice. A survey of risk management executives from 65 firms in various industries was undertaken and 108 answers were analyzed. Potential relationships among the variables are investigated using descriptive statistics and multivariate statistical models.
The board's level of understanding of risk management policies and reports is related to the quality of the flow of communication in the firm and to the perceived level of integration of the risk policy in the business processes.
Abstract:
Analysis of risk measures associated with price series movements and their prediction is of strategic importance in the financial markets, as well as to policy makers, in particular for short- and long-term planning and for setting economic growth targets. For example, oil-price risk management focuses primarily on when and how an organization can best prevent costly exposure to price risk. Value-at-Risk (VaR) is the commonly practised instrument for measuring risk and is evaluated by analysing the negative/positive tail of the probability distribution of the returns (profit or loss). In modelling applications, least-squares estimation (LSE)-based linear regression models are often employed for modelling and analyzing correlated data. These linear models are optimal and perform relatively well under conditions such as errors following normal or approximately normal distributions, freedom from large outliers, and satisfaction of the Gauss-Markov assumptions. However, in practical situations the LSE-based linear regression models often fail to provide optimal results, for instance in non-Gaussian settings, especially when the errors follow distributions with fat tails, where the error variance may not even be finite. This is the situation in risk analysis, which involves analyzing tail distributions. Thus, applications of the LSE-based regression models may be questioned for appropriateness and may have limited applicability. We have carried out a risk analysis of Iranian crude oil price data based on Lp-norm regression models and have noted that the LSE-based models do not always perform best. We discuss results from the L1-, L2- and L∞-norm based linear regression models. ACM Computing Classification System (1998): B.1.2, F.1.3, F.2.3, G.3, J.2.
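The norm comparison at the heart of this abstract can be sketched in a few lines (an illustrative toy with an invented line-fitting helper and a historical-simulation VaR; these are not the models or data of the study):

```python
import numpy as np
from scipy.optimize import minimize

def fit_lp_line(x, y, p=2.0):
    """Fit y ~ a + b*x by minimising the Lp norm of the residuals.

    p=2 reproduces ordinary least squares; p=1 down-weights outliers;
    p=np.inf minimises the worst-case (Chebyshev) residual.
    """
    def loss(theta):
        r = y - (theta[0] + theta[1] * x)
        return np.max(np.abs(r)) if np.isinf(p) else np.sum(np.abs(r) ** p)
    b0, a0 = np.polyfit(x, y, 1)  # least-squares starting point
    return minimize(loss, x0=[a0, b0], method="Nelder-Mead").x  # (a, b)

def historical_var(returns, alpha=0.95):
    """Historical-simulation VaR: the loss exceeded with probability
    1-alpha, read off the empirical tail of the return distribution."""
    return -np.quantile(returns, 1.0 - alpha)
```

With clean Gaussian errors the three fits nearly coincide; with fat-tailed errors the L1 fit is typically the more stable one, in line with the observation that LSE does not always perform best.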
Abstract:
The aim of the case study is to express, in numbers, the impact of delayed repair time on revenues and profit, using the example of power plant unit outages. The main steps of the risk assessment are:
• creating a project plan suitable for risk assessment
• identifying the risk factors for each project activity
• evaluating the risk factors through scenario analysis
• selecting the critical risk factors based on the results of the quantitative risk analysis
• formulating risk response actions for the critical risks
• running a Monte-Carlo simulation [1] using the results of the scenario analysis
• building a macro that connects the results of the risk assessment, the production plan and the business plan.
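The scenario-to-simulation step can be sketched as follows (all distributions, capacities and prices are invented placeholders, not figures from the case study):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte-Carlo trials

# Scenario-analysis estimate (illustrative): repair delay in days as a
# triangular(best case, most likely, worst case) distribution
delay_days = rng.triangular(left=2, mode=5, right=20, size=N)

# Lost revenue = delay * daily output * electricity price (assumed values)
mw_output = 400        # unit capacity, MW
price_per_mwh = 60.0   # EUR/MWh
lost_revenue = delay_days * 24 * mw_output * price_per_mwh

# Risk measures fed back into the production and business plans
expected_loss = lost_revenue.mean()
p95_loss = np.percentile(lost_revenue, 95)  # 95th-percentile downside
```

The simulated distribution, rather than a single-point estimate, is what the macro would carry over into the business plan.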
Abstract:
The exploration and development of oil and gas reserves located in harsh offshore environments are characterized by high risk. Some of these reserves would be uneconomical if produced using conventional drilling technology, due to increased drilling problems and prolonged non-productive time. Seeking new ways to reduce drilling cost and minimize risks has led to the development of Managed Pressure Drilling techniques, which address the drawbacks of conventional overbalanced and underbalanced drilling. As managed pressure drilling techniques evolve, there are many unanswered questions related to safety and operating pressure regimes, and quantitative risk assessment techniques are often used to answer them. Quantitative risk assessment is conducted for the various stages of drilling operations: drilling ahead, tripping, casing and cementing. A diagnostic model for analyzing the rotating control device, the main component of managed pressure drilling, is also studied. The Noisy-OR logic concept is explored to capture the unique relationship between casing and cementing operations in leading to well integrity failure, as well as to model the critical components of the constant bottom-hole pressure variant of managed pressure drilling during tripping. Relevant safety functions and inherent safety principles are utilized to improve well integrity operations. A loss-function modelling approach enabling dynamic consequence analysis is adopted to study blowout risk for real-time decision making. The aggregation of the blowout loss categories (production, asset, human health, environmental response and reputation losses) leads to risk estimation using a dynamically determined probability of occurrence.
Lastly, the various sub-models developed for the stages and sub-operations of drilling, together with the consequence modelling approach, are integrated for a holistic risk analysis of drilling operations.
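The Noisy-OR gate mentioned above has a compact closed form: the failure fires unless every active cause independently fails to trigger it. A minimal sketch (link probabilities are illustrative, not those of the thesis):

```python
import numpy as np

def noisy_or(link_probs, active):
    """Noisy-OR: P(failure) given independent, individually sufficient causes.

    link_probs[i] : probability that cause i alone triggers the failure
    active[i]     : 1 if cause i is present, 0 otherwise
    """
    link_probs = np.asarray(link_probs, dtype=float)
    active = np.asarray(active, dtype=float)
    # Failure occurs unless every active cause independently fails to fire
    return 1.0 - np.prod(1.0 - link_probs * active)

# Illustrative use: a casing defect (0.3) and a cementing defect (0.4)
# both present give P(well integrity failure) = 1 - 0.7*0.6 = 0.58
p_fail = noisy_or([0.3, 0.4], [1, 1])
```

The appeal in a Bayesian-network setting is that only one parameter per cause is needed, instead of a full conditional probability table over all cause combinations.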
Abstract:
Estimation of the absolute risk of cardiovascular disease (CVD), preferably with population-specific risk charts, has become a cornerstone of CVD primary prevention. Regular recalibration of risk charts may be necessary due to decreasing CVD rates and CVD risk factor levels. The SCORE risk charts for fatal CVD risk assessment were first calibrated for Germany with 1998 risk factor level data and 1999 mortality statistics. We present an update of these risk charts based on the SCORE methodology, including estimates of relative risks from SCORE, risk factor levels from the German Health Interview and Examination Survey for Adults 2008-11 (DEGS1) and official mortality statistics from 2012. Competing-risks methods were applied and estimates were independently validated. Updated risk charts were calculated based on cholesterol, smoking and systolic blood pressure risk factor levels, sex and 5-year age groups. The absolute 10-year risk estimates of fatal CVD were lower according to the updated risk charts than under the first calibration for Germany. In a nationwide sample of 3062 adults aged 40-65 years free of major CVD from DEGS1, the mean 10-year risk of fatal CVD estimated by the updated charts was 29% lower, and the estimated proportion of high-risk people (10-year risk ≥ 5%) 50% lower, compared with the older risk charts. This recalibration shows the need for regular updates of risk charts according to changes in mortality and risk factor levels in order to sustain the identification of people at high CVD risk.
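The SCORE-style combination of a (re)calibrated baseline survival with fixed relative risks can be sketched as follows (the function name, coefficients and survival values are placeholders, not the published German calibration):

```python
import math

def score_10y_risk(s0_10y, betas, x, x_mean):
    """10-year fatal-CVD risk in the SCORE spirit: a calibrated baseline
    10-year survival s0_10y raised to the exponentiated, centred linear
    predictor of the risk factors. All inputs are illustrative.

    s0_10y : baseline 10-year survival for the person's sex/age group
    betas  : log relative risks (e.g. for cholesterol, SBP, smoking)
    x      : the person's risk factor values
    x_mean : population mean risk factor values used in calibration
    """
    lin = sum(b * (xi - mi) for b, xi, mi in zip(betas, x, x_mean))
    return 1.0 - s0_10y ** math.exp(lin)
```

Recalibration replaces s0_10y (and the population means) with up-to-date national values while retaining the relative risks, which is why falling mortality lowers the chart's absolute risk estimates, as the abstract reports.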
Abstract:
This chapter sets out a comprehensive analysis of the regulation of money market funds in the EU and US. The theoretical framework is illustrated with unique cases and examples and includes checklists to assist with the practice of fund management and legal risk analysis.
Abstract:
The objective of this study was to estimate the spatial distribution of work accident risk in the informal work market in the urban zone of an industrialized city in southeast Brazil and to examine concomitant effects of age, gender, and type of occupation after controlling for spatial risk variation. The basic methodology adopted was that of a population-based case-control study with particular interest focused on the spatial location of work. Cases were all casual workers in the city suffering work accidents during a one-year period; controls were selected from the source population of casual laborers by systematic random sampling of urban homes. The spatial distribution of work accidents was estimated via a semiparametric generalized additive model with a nonparametric bidimensional spline of the geographical coordinates of cases and controls as the nonlinear spatial component, and including age, gender, and occupation as linear predictive variables in the parametric component. We analyzed 1,918 cases and 2,245 controls between 1/11/2003 and 31/10/2004 in Piracicaba, Brazil. Areas of significantly high and low accident risk were identified in relation to mean risk in the study region (p < 0.01). Work accident risk for informal workers varied significantly in the study area. Significant age, gender, and occupational group effects on accident risk were identified after correcting for this spatial variation. A good understanding of high-risk groups and high-risk regions underpins the formulation of hypotheses concerning accident causality and the development of effective public accident prevention policies.
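The spatial component of such an analysis can be illustrated with a kernel-smoothed log relative-risk surface, a simple stand-in for the study's bidimensional spline term (function and data invented for illustration):

```python
import numpy as np

def spatial_log_relative_risk(cases_xy, controls_xy, grid_xy, bandwidth):
    """Log relative-risk surface: Gaussian kernel density of case locations
    divided by that of control locations, evaluated on a grid. A simple
    stand-in for the semiparametric GAM spatial term in the study."""
    def kde(points, grid, h):
        # pairwise squared distances, shape (n_grid, n_points)
        d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * h * h)).sum(1) / (len(points) * 2 * np.pi * h * h)
    cases = np.asarray(cases_xy, float)
    ctrls = np.asarray(controls_xy, float)
    grid = np.asarray(grid_xy, float)
    return np.log(kde(cases, grid, bandwidth) / kde(ctrls, grid, bandwidth))
```

Areas where the surface is significantly above zero correspond to the high-risk regions the study maps; covariates such as age, gender and occupation would enter a parametric part alongside this spatial term.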
Abstract:
Part 17: Risk Analysis
Abstract:
Current procedures for flood risk estimation assume flood distributions are stationary over time, meaning that annual maximum flood (AMF) series are not affected by climatic variation, land use/land cover (LULC) change, or management practices. Thus, changes in LULC and climate are generally not accounted for in policy and design related to flood risk and flood control, and historical flood events are deemed representative of future flood risk. These assumptions need to be re-evaluated, however, as climate change and anthropogenic activities have been observed to have large impacts on flood risk in many areas. In particular, understanding the effects of LULC change is essential to the study of global environmental change and the consequent hydrologic responses. The research presented herein provides possible causation for observed nonstationarity in AMF series with respect to changes in LULC, as well as a means to assess the degree to which future LULC change will impact flood risk. Four watersheds in the Midwestern, Northeastern, and Central United States were studied to determine flood risk associated with historical and projected future LULC change. Historical single-frame aerial images dating back to the mid-1950s were used along with Geographic Information Systems (GIS) and remote sensing models (SPRING and ERDAS) to create historical land use maps. The Forecasting Scenarios of Future Land Use Change (FORE-SCE) model was applied to generate future LULC maps annually from 2006 to 2100 for the conterminous U.S. based on the four IPCC-SRES future emission scenarios. These land use maps were input into previously calibrated Soil and Water Assessment Tool (SWAT) models for two case study watersheds. In order to isolate the effects of LULC change, the only variable parameter was the Runoff Curve Number associated with the land use layer.
All simulations were run with daily climate data from 1978-1999, consistent with the 'base' model, which employed the 1992 NLCD to represent 'current' conditions. Output daily maximum flows were converted to instantaneous AMF series and subsequently modeled using a Log-Pearson Type 3 (LP3) distribution to evaluate flood risk. Analysis of the progression of LULC change over the historic period and the associated SWAT outputs revealed that AMF magnitudes tend to increase over time in response to increasing degrees of urbanization. This is consistent with positive trends in the AMF series identified in previous studies, although correlations between LULC change and identified change points are difficult to establish due to large time gaps in the generated historical LULC maps, caused mainly by the unavailability of historic aerial imagery of sufficient quality. Similarly, increases in the mean and median AMF magnitude were observed in response to future LULC change projections, with the tails of the distributions remaining reasonably constant. FORE-SCE scenario A2 was found to have the most dramatic impact on AMF series, consistent with its more extreme projections of population growth, energy demand, agricultural land use, and urban expansion, while AMF outputs based on scenario B2 showed little change for the future, as that scenario focuses on environmental conservation and regional solutions to environmental issues.
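The LP3 step, fitting the log-transformed AMF series and reading off design floods, can be sketched as follows (a generic textbook procedure using a station skew, not the study's calibrated models):

```python
import numpy as np
from scipy import stats

def lp3_flood_quantile(amf, return_period):
    """Log-Pearson Type 3 flood-frequency estimate: fit a Pearson Type 3
    distribution to log10 of the annual maximum flood (AMF) series, then
    back-transform the quantile for the given return period."""
    log_q = np.log10(np.asarray(amf, dtype=float))
    skew = stats.skew(log_q, bias=False)      # station skew of the logs
    mean, sd = log_q.mean(), log_q.std(ddof=1)
    p_exceed = 1.0 / return_period
    # Frequency factor K: standardized Pearson III quantile at the
    # non-exceedance probability
    k = stats.pearson3.ppf(1.0 - p_exceed, skew)
    return 10 ** (mean + k * sd)
```

Re-fitting this distribution to AMF series simulated under each LULC scenario is how shifts in mean and median flood magnitude translate into changed design-flood estimates.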
Abstract:
What do international non-governmental organisations (INGOs) do before and during the escalation of conflicts? The academic literature primarily focuses on these organisations' behaviour during an evident crisis rather than on how they anticipate the escalation of conflicts, assess the situation in which they find themselves, and decide on strategies to cope with the possibility of upcoming violence. Such lopsided focus persists despite calls for INGOs to become more proactive in managing their programmes and their staff members' safety. Mindful of this imbalance, the present study provides a causal explanation of how decision-makers in INGOs anticipate and react to the risk of low-level violence escalating into full-blown conflicts. This thesis aims to explain these actors' behaviour by presenting it as a two-step process involving how INGOs conduct risk assessments and how they turn these assessments into decisions. The study performs a structured, focused comparison of seven INGOs operating in South Sudan before the so-called Juba Clashes of 7 July 2016. Based on an analytical framework of INGO decision-making stemming from political risk analysis, organisational decision-making theory and conflict studies literature, the study reconstructs decision-making via process-tracing combined with mixed methods of data collection.
Abstract:
The great challenges of today put great pressure on the food chain to provide safe and nutritious food that meets regulations and consumer health standards. In this context, risk analysis is used to produce an estimate of the risks to human health and to identify and implement effective risk-control measures. The aims of this work were: 1) to describe how quantitative risk assessment (QRA) is used to evaluate the risk to consumer health; 2) to address the methodology for obtaining models to apply in quantitative microbiological risk assessment (QMRA); and 3) to evaluate solutions to mitigate the risk. The application of a quantitative chemical risk assessment (QCRA) to the Italian milk industry enabled the assessment of Aflatoxin M1 exposure and its impact on different population categories, and the comparison of risk-mitigation strategies. The results highlighted the most sensitive population categories and showed how more stringent sampling plans reduce risk. The application of a QMRA to Spanish fresh cheeses showed how contamination of this product with Listeria monocytogenes may generate a risk for consumers. Two risk-mitigation actions were evaluated, reducing shelf life and lowering domestic refrigerator temperature, both proving effective in reducing the risk of listeriosis. A description of the most widely applied protocols for data generation for predictive model development was provided, to increase transparency and reproducibility and to support better QMRA. The development of a linear regression model describing the fate of Salmonella spp. in Italian salami during the production process and high-pressure processing (HPP) was described. Alkaline electrolyzed water was evaluated for its potential to reduce microbial loads on working surfaces, with results showing its effectiveness. This work showed the relevance of QRA, predictive microbiology and new technologies for ensuring food safety in a more integrated way. Filling data gaps, developing better models and including new risk-mitigation strategies may lead to improvements in the presented QRAs.
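The effect of the two shelf-life mitigation actions can be illustrated with a simple predictive-microbiology sketch (a Ratkowsky-style square-root secondary model with invented parameters, not the models fitted in this work):

```python
def final_log_count(log_n0, temp_c, days, b=0.023, t_min=-1.18):
    """Final Listeria count (log10 CFU/g) after `days` of storage.

    Secondary model: Ratkowsky square-root relation sqrt(mu) = b*(T - Tmin),
    with mu the growth rate in log10 CFU per day. b and t_min are
    illustrative values, not fitted parameters from the study.
    """
    mu = (b * (temp_c - t_min)) ** 2  # growth rate at temperature temp_c
    return log_n0 + mu * days         # primary model: log-linear growth

# The two mitigation actions evaluated in the abstract:
baseline = final_log_count(1.0, temp_c=8.0, days=21)  # current practice
shorter  = final_log_count(1.0, temp_c=8.0, days=14)  # shorter shelf life
colder   = final_log_count(1.0, temp_c=5.0, days=21)  # colder refrigerator
```

In a full QMRA the final count would feed a dose-response model to yield listeriosis risk; here both actions visibly lower the count at consumption, mirroring the reported result.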
Abstract:
The objective of this thesis work was to evaluate the sensitivity of quantitative risk analysis (QRA) to the frequency of occurrence of the release scenarios of hazardous substances, i.e. of the top events. The risk analysis was carried out by applying, to a major-accident-hazard establishment in Catalonia, the procedural steps set out in instruction 14/2008 SIE in force in that region. The application of this procedure yielded iso-risk curves, which connect the points around the establishment having the same value of local risk. The base frequencies of the top events were first taken from the BEVI guideline; subsequently, for the sensitivity analysis, both the values provided by three other authoritative guidelines and the values obtained by increasing the base frequencies by 20%, 30%, 50%, 70%, 200%, 300%, 500%, 700%, 1000%, 2000%, 3000%, 4000% and 5000% were considered. The consequence analysis was carried out with the EFFECTS and ALOHA software packages, while the RISKCURVES code was used for risk recomposition. The sensitivity to top-event frequencies was evaluated in terms of the variation of the maximum diameter of the iso-risk curves, with particular attention to the curve corresponding to 10^-6 events/year, which represents the risk-acceptability limit for land-use planning around the establishment. It was thus demonstrated that the extent of the iso-risk curves depends very strongly on the frequency of the top events and that the use of frequency data from different guidelines leads to iso-risk curves of very different sizes. It is therefore confirmed that the choice of the frequencies of the accident scenarios is a delicate and crucial step in conducting a risk analysis.
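The reported sensitivity is easy to reproduce in miniature: local risk at a point is the frequency-weighted sum of fatal-outcome probabilities over the top events, so scaling the frequencies directly inflates the iso-risk contours. A toy sketch (the exponential vulnerability function below stands in for the EFFECTS/ALOHA consequence models; all numbers are invented):

```python
import numpy as np

def local_risk(dist_m, scenarios):
    """Local (individual) risk at a given distance from the source:
    sum over top events of frequency * probability of a fatal outcome."""
    risk = 0.0
    for freq_per_year, lethal_range_m in scenarios:
        p_fatal = np.exp(-dist_m / lethal_range_m)  # stand-in vulnerability
        risk += freq_per_year * p_fatal
    return risk

def isorisk_diameter(scenarios, level=1e-6, r_max=2000.0):
    """Largest distance at which local risk still exceeds `level`;
    twice that distance approximates the iso-risk curve diameter."""
    dists = np.linspace(0.0, r_max, 4001)
    inside = [d for d in dists if local_risk(d, scenarios) >= level]
    return 2.0 * max(inside) if inside else 0.0
```

Doubling the top-event frequency visibly enlarges the 10^-6 events/year contour, which is the qualitative effect the thesis quantifies with RISKCURVES.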
Abstract:
Carbon capture and storage (CCS) represents an interesting climate mitigation option; however, as with any other human activity, there is a pressing need to assess and manage the associated risks. This study specifically addresses the marine environmental risk posed by CO2 leakages from the CCS subsea engineering system, namely offshore pipelines and injection and plugged-and-abandoned wells. The aim of this thesis work is to begin developing a complete and standardized practical procedure for performing a quantified environmental risk assessment (ERA) for CCS, with reference to the specific activities mentioned above. Such an effort would be of great relevance not only to companies willing to implement CCS, as methodological guidance, but also, by standardizing the ERA procedure, to begin changing people's perception of CCS, which is often discredited due to the evident lack of comprehensive and systematic methods for assessing impacts on the marine environment. The backbone of the framework developed consists of the integration of ERA's main steps with those of quantified risk assessment (QRA), with the aim of quantitatively characterizing risk and describing it as a combination of the magnitude of the consequences and their frequency. The framework remains high-level, however, as not every aspect has been dealt with in the required detail. Thus, several alternative options are presented for use depending on the situation. Further specific studies should address their accuracy and efficiency and resolve the knowledge gaps that emerged, in order to establish and validate a final, complete procedure. Notwithstanding the knowledge gaps and uncertainties, which certainly need to be addressed, this preliminary framework already finds some relevance in field applications, as non-stringent guidance for performing CCS ERA, and it constitutes the foundation of the final framework.
Abstract:
OBJECTIVE: To assess whether the content of anti-oxidized-LDL autoantibodies (anti-oxLDL) in the plasma of adolescents correlates with their anthropometric measurements and lipid profile. METHODS: The study included 150 adolescents aged 10 to 15 years, recruited from the obesity outpatient clinic of the Universidade Federal de São Paulo (SP) and from public schools in Piracicaba (SP). Anthropometric measurements such as body mass index and waist and arm circumferences were evaluated, classifying the adolescents as normal-weight, overweight or obese. For the biochemical analyses, the lipid profile was determined by enzymatic colorimetric methods, and the content of anti-oxLDL autoantibodies was detected by ELISA. RESULTS: According to the analyses of the anthropometric variables, the obese group showed an altered profile relative to the normal-weight and overweight groups (p < 0.01), indicating cardiovascular risk. When the lipid profile was evaluated, statistically significant differences were observed for total cholesterol (p = 0.011), HDL cholesterol (p = 0.001) and LDL cholesterol (p < 0.042) between the normal-weight and obese groups. In the analyses of plasma anti-oxLDL autoantibodies, the overweight (p = 0.012) and obese (p < 0.001) groups showed higher values than the normal-weight group. There were also correlations between anti-oxLDL autoantibodies and anthropometric variables. CONCLUSION: The presence of anti-oxLDL autoantibodies in adolescents and the metabolic alterations in the lipid profile varied proportionally with anthropometric parameters, which makes the anti-oxLDL content a potential biochemical risk indicator for metabolic syndrome.
Abstract:
Objectives: To examine the prevalence of under- and overreporting of energy intake in adolescents and its associated factors. Methods: Cross-sectional study of 96 post-pubertal adolescents (47 normal-weight and 49 obese), with a mean age of 16.6±1.3 years. Weight and height were measured and body mass index was calculated. Body composition was assessed by dual-energy X-ray absorptiometry. Food intake was assessed using a 3-day dietary record. A biochemical evaluation was performed (serum levels of total cholesterol, LDL, HDL, plasma glucose and insulin). Underreporters reported an energy intake < 1.35 x basal metabolic rate (BMR), while overreporters reported an energy intake > 2.4 x BMR. Results: Inaccurate reporting (under- or overreporting) of energy intake was identified in 65.6% of the adolescents (64.6% underreporting and 1% overreporting). Obese adolescents were 5.0 times more likely to underreport energy intake (95%CI 2.0-12.7) than normal-weight participants. Underreporters showed higher rates of insufficient carbohydrate intake (19.3 versus 12.1%, p = 0.046) and insufficient lipid intake (11.3 versus 0%, p < 0.001) than plausible reporters. Cholesterol intake was also lower among underreporters (p = 0.017). There were no significant differences in body composition or biochemical parameters with respect to inaccurate reporting. Conclusions: The results showed a high percentage of inaccurate reporting of energy intake among adolescents, especially among the obese, suggesting that energy-adjusted nutrient intake values should be used in risk analysis of the diet-disease relationship in order to help reduce the errors associated with inaccurate reporting.