955 results for Multivariate risk model


Relevance:

30.00%

Publisher:

Abstract:

The purpose of this study is to examine the impact of the choice of cut-off points, sampling procedures, and the business cycle on the accuracy of bankruptcy prediction models. Misclassification can result in erroneous predictions, leading to prohibitive costs to firms, investors, and the economy. To test the impact of the choice of cut-off points and sampling procedures, three bankruptcy prediction models are assessed: Bayesian, Hazard, and Mixed Logit. A salient feature of the study is that the analysis includes both parametric and nonparametric bankruptcy prediction models. A sample of firms from the Lynn M. LoPucki Bankruptcy Research Database in the U.S. was used to evaluate the relative performance of the three models. The choice of a cut-off point and sampling procedures were found to affect the rankings of the various models. In general, the results indicate that the empirical cut-off point estimated from the training sample resulted in the lowest misclassification costs for all three models. Although the Hazard and Mixed Logit models resulted in lower costs of misclassification in the randomly selected samples, the Mixed Logit model did not perform as well across varying business cycles. In general, the Hazard model has the highest predictive power. However, the higher predictive power of the Bayesian model when the ratio of the cost of Type I errors to the cost of Type II errors is high is relatively consistent across all sampling methods. Such an advantage of the Bayesian model may make it more attractive in the current economic environment. This study extends recent research comparing the performance of bankruptcy prediction models by identifying the conditions under which a model performs better. It also allays the concerns of a range of user groups, including auditors, shareholders, employees, suppliers, rating agencies, and creditors, with respect to assessing failure risk.
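
The abstract does not spell out how the empirical cut-off is estimated. As an illustration only, the following Python sketch shows one common approach: scanning candidate cut-offs on the training sample and keeping the one that minimizes total misclassification cost for a given Type I/Type II cost ratio. The function name, cost values, and toy data are hypothetical, not taken from the study.

```python
import numpy as np

def empirical_cutoff(probs, labels, cost_type1=10.0, cost_type2=1.0):
    """Pick the cut-off on predicted bankruptcy probabilities that
    minimizes total misclassification cost on a training sample.

    Type I error: a bankrupt firm (label 1) classified as healthy.
    Type II error: a healthy firm (label 0) classified as bankrupt.
    """
    best_cut, best_cost = None, np.inf
    for c in np.unique(probs):
        pred = (probs >= c).astype(int)              # 1 = predicted bankrupt
        type1 = np.sum((labels == 1) & (pred == 0))  # missed bankruptcies
        type2 = np.sum((labels == 0) & (pred == 1))  # false alarms
        cost = cost_type1 * type1 + cost_type2 * type2
        if cost < best_cost:
            best_cut, best_cost = c, cost
    return best_cut, best_cost

# Toy usage with simulated fitted probabilities from any of the models.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
probs = np.clip(0.3 * labels + rng.uniform(0.0, 0.7, size=200), 0.0, 1.0)
cut, cost = empirical_cutoff(probs, labels)
```

A high `cost_type1`/`cost_type2` ratio pushes the chosen cut-off down, which is the regime in which the abstract reports the Bayesian model's advantage.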

Relevance:

30.00%

Publisher:

Abstract:

This project explored self-regulation among children impacted by learning disabilities. More specifically, this thesis examined whether a remedial literacy program called Reading Rocks!, offered by the Learning Disabilities Association of Niagara Region, provided participating children opportunities to set goals, develop strategies to meet these goals, and receive internal and external feedback, all processes associated with the model of self-regulated learning pioneered by Butler and Winne (1995) and Winne and Hadwin (1999). In this thesis, I triangulate the data through the combination of three methodologies. First, I describe the various elements of the Reading Rocks! program. Second, I analyze the data gathered through three semi-structured interviews with three parents of children who participated in the Reading Rocks! program to demonstrate whether the program provides opportunities for children to self-regulate their learning. Third, I analyze photographic evidence of the motivational workstation boards created by the tutors and children to further illustrate how Reading Rocks! promotes self-regulatory processes among children.

Relevance:

30.00%

Publisher:

Abstract:

People with intellectual disability who sexually offend commonly live in community-based settings since the closing of all institutions across the province of Ontario. Nine (n = 9) front-line staff who provide support to these individuals in three different settings (a treatment setting, a transitional setting, and a residential setting) were interviewed. Participants responded to 47 questions exploring how sex offenders with intellectual disability can be supported in the community to prevent re-offending. Questions encompassed variables that included staff attitudes, various factors impacting support, structural components of the setting, quality of life and the good life, staff training, staff perspectives on treatment, and understanding of risk management. Three overlapping models supported in the literature were used collectively as the basis for this research: the Good Lives Model (Ward & Gannon, 2006; Ward et al., 2007), the quality of life model (Felce & Perry, 1995), and variables associated with risk management. Results of this research showed how this population is being supported in the community, with an emphasis on the following elements: positive and objective staff attitudes, teamwork, clear rules and protocols, ongoing supervision, consistency, highly trained staff, and environments that promote quality of life. New concepts arose suggesting that all settings display an unequal balance between upholding human rights and managing risks when supporting this high-risk population. This highlights the need for comprehensive assessments to match the offender to the proper setting and supports, using an integration of the Risk-Need-Responsivity model and the Good Lives Model for offender rehabilitation and to reduce the likelihood of re-offending.

Relevance:

30.00%

Publisher:

Abstract:

The Feedback-Related Negativity (FRN) is thought to reflect a dopaminergic prediction error signal sent from subcortical areas to the anterior cingulate cortex (ACC), i.e., a bottom-up signal. Two studies were conducted to test a new model of FRN generation, which includes direct modulating influences of the medial PFC (i.e., top-down signals) on the ACC at the time of the FRN. Study 1 examined the effects of one's sense of control (top-down) and of informative cues (bottom-up) on FRN measures. In Study 2, sense of control and instruction-based (top-down) and probability-based (bottom-up) expectations were manipulated to test the proposed model. The results suggest that any influences of the medial PFC on the activity of the ACC that occur in the context of incentive tasks are not direct. The FRN was shown to be sensitive to salient stimulus characteristics. The results of this dissertation partially support the reinforcement learning theory, in that the FRN is a marker of a prediction error signal from subcortical areas. However, the pattern of results outlined here suggests that prediction errors are based on salient stimulus characteristics and are not reward specific. A second goal of this dissertation was to examine whether ACC activity, measured through the FRN, is altered in individuals at risk for problem gambling (PG). Individuals in this group were more sensitive to the valence of the outcome in a gambling task than individuals not at risk, suggesting that gambling contexts increase the sensitivity of the reward system to outcome valence in individuals at risk for PG. Furthermore, at-risk participants showed an increased sensitivity to reward characteristics and a decreased response to loss outcomes; this contrasts with those not at risk, whose FRNs were sensitive to losses. As the results did not replicate previous research showing attenuated FRNs in pathological gamblers, it is likely that the size and timing of the FRN do not change gradually with increasing risk of maladaptive behaviour. Instead, changes in ACC activity reflected by the FRN can, in general, be observed only after behaviour becomes clinically maladaptive or through comparison between different types of gain/loss outcomes.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study was to investigate the relationship between factors related to personal cancer history and lung cancer risk, as well as to assess their predictive utility. Characteristics of interest included the number, anatomical site(s), and age of onset of previous cancer(s). Data from the Prostate, Lung, Colorectal and Ovarian (PLCO) Cancer Screening Trial (N = 154,901) and the National Lung Screening Trial (N = 53,452) were analysed. Logistic regression models were used to assess the relationship between each variable of interest and 6-year lung cancer risk. Predictive utility was assessed through changes in the area under the curve (AUC) after substitution into the PLCOall2014 lung cancer risk prediction model. Previous lung, uterine, and oral cancers were strongly and significantly associated with elevated 6-year lung cancer risk after controlling for confounders. None of these refined measures of personal cancer history offered more predictive utility than the simple (yes/no) measure already included in the PLCOall2014 model.
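
The AUC-change comparison can be illustrated with a short sketch. The following Python example fits a logistic model with and without a refined cancer-history predictor and compares the two AUCs; the covariates, coefficients, and the `prev_lung` variable are simulated stand-ins rather than PLCO data, and the actual analysis substituted variables into the published PLCOall2014 model rather than refitting from scratch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def auc_with_and_without(X_base, extra, y):
    """Return in-sample AUCs of a logistic model without and with an
    extra cancer-history predictor (illustration only)."""
    base = LogisticRegression(max_iter=1000).fit(X_base, y)
    auc_base = roc_auc_score(y, base.predict_proba(X_base)[:, 1])

    X_full = np.column_stack([X_base, extra])
    full = LogisticRegression(max_iter=1000).fit(X_full, y)
    auc_full = roc_auc_score(y, full.predict_proba(X_full)[:, 1])
    return auc_base, auc_full

# Simulated covariates standing in for risk-model inputs, plus a
# hypothetical refined measure of personal cancer history.
rng = np.random.default_rng(1)
n = 5000
X_base = rng.normal(size=(n, 4))
prev_lung = rng.integers(0, 2, size=n)
logit = X_base @ np.array([0.5, 0.3, 0.2, 0.1]) + 0.8 * prev_lung - 3.0
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
print(auc_with_and_without(X_base, prev_lung, y))
```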

Relevance:

30.00%

Publisher:

Abstract:

Despite lung cancer being considered a disease of smokers, approximately 10-15% of cases occur in never-smokers. Lung cancer risk prediction models have demonstrated excellent ability to discriminate cases from non-cases and have been shown to be more efficient at selecting individuals for future screening than current criteria. Existing models have primarily been developed in populations of smokers, so there was a need to develop an accurate model for never-smokers. This study focused on developing and validating a model using never-smokers from the Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial. Cox regression analysis, with six-year follow-up, was used for model building. Predictors included age, body mass index, education level, personal history of cancer, family history of lung cancer, previous chest X-ray, and secondhand smoke exposure. This model achieved fair discrimination (optimism-corrected c-statistic = 0.6645) and good calibration. This represents an improvement on existing never-smoker models, but it is not suitable for individual-level risk prediction.
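
For readers unfamiliar with optimism correction, the sketch below shows a Harrell-style bootstrap correction of a Cox model's c-statistic using the `lifelines` library. It assumes a prepared DataFrame holding the chosen predictors plus follow-up time and event columns; the helper name, bootstrap count, and setup are illustrative, and the thesis's exact validation procedure may differ.

```python
import numpy as np
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

def optimism_corrected_cindex(df, duration_col, event_col, n_boot=200, seed=0):
    """Apparent c-statistic minus average bootstrap optimism (sketch)."""
    rng = np.random.default_rng(seed)
    cph = CoxPHFitter().fit(df, duration_col, event_col)
    apparent = cph.concordance_index_

    optimism = 0.0
    for _ in range(n_boot):
        boot = df.sample(n=len(df), replace=True,
                         random_state=int(rng.integers(0, 2**31 - 1)))
        m = CoxPHFitter().fit(boot, duration_col, event_col)
        c_boot = m.concordance_index_
        # Score the bootstrap model on the original sample; higher partial
        # hazard means shorter survival, hence the sign flip.
        c_orig = concordance_index(df[duration_col],
                                   -m.predict_partial_hazard(df),
                                   df[event_col])
        optimism += (c_boot - c_orig) / n_boot
    return apparent - optimism
```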

Relevance:

30.00%

Publisher:

Abstract:

Latent variable models in finance originate both from asset pricing theory and from time series analysis. These two strands of literature appeal to two different concepts of latent structure, both of which are useful for reducing the dimension of a statistical model specified for a multivariate time series of asset prices. In the CAPM or APT beta pricing models, the dimension reduction is cross-sectional in nature, while in time-series state-space models, dimension is reduced longitudinally by assuming conditional independence between consecutive returns, given a small number of state variables. In this paper, we use the concept of the Stochastic Discount Factor (SDF), or pricing kernel, as a unifying principle to integrate these two concepts of latent variables. Beta pricing relations amount to characterizing the factors as a basis of a vector space for the SDF. The coefficients of the SDF with respect to the factors are specified as deterministic functions of some state variables that summarize their dynamics. In beta pricing models, it is often said that only the factor risk is compensated, since the remaining idiosyncratic risk is diversifiable. Implicitly, this argument can be interpreted as a conditional cross-sectional factor structure, that is, a conditional independence between contemporaneous returns of a large number of assets, given a small number of factors, as in standard factor analysis. We provide this unifying analysis in the context of conditional equilibrium beta pricing as well as asset pricing with stochastic volatility, stochastic interest rates, and other state variables. We address the general issue of econometric specification of dynamic asset pricing models, which covers the modern literature on conditionally heteroskedastic factor models as well as equilibrium-based asset pricing models with an intertemporal specification of preferences and market fundamentals. We interpret various instantaneous causality relationships between state variables and market fundamentals as leverage effects and discuss their central role in the validity of standard CAPM-like stock pricing and preference-free option pricing.
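
The two relations at the core of this integration can be written compactly. In generic notation (a sketch of what the abstract describes; the symbols are not taken from the paper):

```latex
% SDF pricing restriction for gross returns R_{i,t+1} given the
% information set I_t, and the factor representation of the SDF with
% coefficients \lambda_k(\cdot) that are deterministic functions of
% state variables Z_t:
\[
  \mathbb{E}\!\left[ m_{t+1} R_{i,t+1} \mid I_t \right] = 1,
  \qquad
  m_{t+1} = \sum_{k=1}^{K} \lambda_k(Z_t)\, F_{k,t+1}.
\]
```

The first equation is the pricing-kernel restriction; the second expresses the beta pricing view that the factors span the SDF, with state variables driving the dynamics of the coefficients.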

Relevance:

30.00%

Publisher:

Abstract:

This paper develops a general stochastic framework and an equilibrium asset pricing model that make clear how attitudes towards intertemporal substitution and risk matter for option pricing. In particular, we show under which statistical conditions option pricing formulas are not preference-free, in other words, when preferences are not hidden in the stock and bond prices as they are in the standard Black and Scholes (BS) or Hull and White (HW) pricing formulas. The dependence of option prices on preference parameters comes from several instantaneous causality effects, such as the so-called leverage effect. We also emphasize that the most standard asset pricing models (the CAPM for the stock and BS or HW preference-free option pricing) are valid under the same stochastic setting (typically the absence of leverage effect), regardless of preference parameter values. Even though we propose a general non-preference-free option pricing formula, we always keep in mind that the BS formula is dominant both as a theoretical reference model and as a tool for practitioners. Another contribution of the paper is to characterize why the BS formula is such a benchmark. We show that, as soon as we are ready to accept a basic property of option prices, namely their homogeneity of degree one with respect to the pair formed by the underlying stock price and the strike price, the statistical hypotheses necessary for homogeneity yield BS-shaped option prices in equilibrium. This BS-shaped option pricing formula allows us to derive interesting characterizations of the volatility smile, that is, the pattern of BS implied volatilities as a function of the option moneyness. First, the asymmetry of the smile is shown to be equivalent to a particular form of asymmetry of the equivalent martingale measure. Second, this asymmetry appears precisely when there is either a premium on instantaneous interest rate risk, or a premium on a generalized leverage effect, or both; in other words, whenever the option pricing formula is not preference-free. Therefore, the main conclusion of our analysis for practitioners should be that an asymmetric smile is indicative of the relevance of preference parameters for pricing options.
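
The homogeneity property invoked here has a simple formal statement. As a sketch in generic notation (not the paper's own):

```latex
% Homogeneity of degree one of the call price C_t in the pair
% (underlying price S_t, strike K):
\[
  C_t(\lambda S_t, \lambda K) = \lambda\, C_t(S_t, K)
  \quad \text{for all } \lambda > 0,
\]
% which is equivalent (take \lambda = 1/K) to pricing in terms of
% moneyness alone:
\[
  C_t(S_t, K) = K\, c_t\!\left( S_t / K \right).
\]
```

It is this reduction to a function of moneyness that, together with the statistical hypotheses the paper identifies, delivers BS-shaped equilibrium prices.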

Relevance:

30.00%

Publisher:

Abstract:

We characterize the solution to a model of consumption smoothing that uses both financing under non-commitment and savings. We show that, under certain conditions, these two instruments complement each other perfectly. If the rate of time preference is equal to the interest rate on savings, perfect smoothing can be achieved in finite time. We also show that, when random revenues are generated by periodic investments in capital through a concave production function, the level of smoothing achieved through financial contracts can influence the efficiency of productive investment. As long as financial contracts cannot achieve perfect smoothing, productive investment will be used as a complementary smoothing device.
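
The role of the equality between the rate of time preference and the interest rate can be seen in the standard consumption Euler equation (a textbook sketch, not necessarily the paper's exact formulation):

```latex
% With discount factor \beta = 1/(1+\rho) and interest rate r, the
% Euler equation for consumption c_t under utility u is
\[
  u'(c_t) = \beta (1+r)\, \mathbb{E}_t\!\left[ u'(c_{t+1}) \right].
\]
% When \rho = r, so that \beta (1+r) = 1, marginal utility follows a
% martingale and the consumer aims at a flat consumption path:
\[
  u'(c_t) = \mathbb{E}_t\!\left[ u'(c_{t+1}) \right].
\]
```

This is the sense in which perfect smoothing becomes attainable once savings earn exactly the rate at which the agent discounts the future.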

Relevance:

30.00%

Publisher:

Abstract:

The GARCH and stochastic volatility paradigms are often brought into conflict as two competing views of the appropriate conditional variance concept: conditional variance given past values of the same series, or conditional variance given a larger past information set (possibly including unobservable state variables). The main thesis of this paper is that, since in general the econometrician has no idea about anything like a structural level of disaggregation, a well-written volatility model should be specified in such a way that one is always allowed to reduce the information set without invalidating the model. In this respect, the debate between observable past information (in the GARCH spirit) and unobservable conditioning information (in the state-space spirit) is irrelevant. In this paper, we stress a square-root autoregressive stochastic volatility (SR-SARV) model, which remains true to the GARCH paradigm of ARMA dynamics for squared innovations but weakens the GARCH structure in order to obtain the required robustness properties with respect to various kinds of aggregation. It is shown that the lack of robustness of the usual GARCH setting is due to two very restrictive assumptions: perfect linear correlation between squared innovations and conditional variance on the one hand, and a linear relationship between the conditional variance of the future conditional variance and the squared conditional variance on the other. By relaxing these assumptions, thanks to a state-space setting, we obtain aggregation results without renouncing the conditional variance concept (and related leverage effects), as is the case for the recently suggested weak GARCH model, which obtains aggregation results by replacing conditional expectations with linear projections on symmetric past innovations. Moreover, unlike the weak GARCH literature, we are able to define multivariate models, including higher-order dynamics and risk premiums (in the spirit of GARCH(p,p) and GARCH-in-mean), and to derive conditional moment restrictions well suited for statistical inference. Finally, we are able to characterize the exact relationships between our SR-SARV models (including higher-order dynamics, leverage effects and in-mean effects), usual GARCH models, and continuous-time stochastic volatility models, so that previous results about aggregation of weak GARCH and continuous-time GARCH modelling can be recovered in our framework.
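
The "ARMA dynamics for squared innovations" the abstract refers to can be made concrete in the textbook GARCH(1,1) case (a standard derivation, not specific to the SR-SARV model):

```latex
% GARCH(1,1): \sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2
%                           + \beta\,\sigma_{t-1}^2.
% Writing u_t = \varepsilon_t^2 - \sigma_t^2, a martingale difference,
% the squared innovations follow an ARMA(1,1):
\[
  \varepsilon_t^2 = \omega + (\alpha + \beta)\,\varepsilon_{t-1}^2
                    + u_t - \beta\, u_{t-1}.
\]
```

The SR-SARV construction keeps this ARMA structure while relaxing the two restrictive GARCH assumptions listed above.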