62 results for Stepwise regression

in Aston University Research Archive


Relevance: 60.00%

Abstract:

This retrospective study was designed to investigate the factors that influence performance in examinations comprising multiple-choice questions (MCQs), short-answer questions (SAQs), and essay questions in an undergraduate population. Final year optometry degree examination marks were analyzed for two separate cohorts. Direct comparison found that students performed better in MCQs than in essays. However, forward stepwise regression analysis of module marks compared with the overall score showed that MCQs were the least influential, and the essay or SAQ mark was a more reliable predictor of overall grade. This has implications for examination design.

Relevance: 60.00%

Abstract:

Risk and knowledge are two concepts and components of business management which have so far been studied almost independently. This is especially true where risk management (RM) is conceived mainly in financial terms, as, for example, in the financial institutions sector. Financial institutions are affected by internal and external changes, with the consequent accommodation to new business models, new regulations and new global competition that includes new big players. These changes induce financial institutions to develop different methodologies for managing risk, such as the enterprise risk management (ERM) approach, in order to adopt a holistic view of risk management and, consequently, to deal with different types of risk, levels of risk appetite, and policies in risk management. However, the methodologies for analysing risk do not explicitly include knowledge management (KM). This research examines the potential relationships between KM and two RM concepts: perceived quality of risk control and perceived value of ERM. To identify how KM concepts can positively influence these RM concepts, a literature review of KM and its processes, and of RM and its processes, was performed. From this literature review, eight hypotheses were analysed using a classification into people, process and technology variables. The data for this research were gathered from a survey of risk management employees in financial institutions, yielding 121 responses for analysis. The analysis of the data was based on multivariate techniques, more specifically stepwise regression analysis. The results showed that the perceived quality of risk control is significantly associated with the following variables: perceived quality of risk knowledge sharing, perceived quality of communication among people, web channel functionality, and risk management information system functionality.
However, the relationships of the KM variables to the perceived value of ERM could not be identified because of the low performance of the models describing these relationships. The analysis reveals important insights into the potential KM support to RM, such as: the better the adoption of KM people and technology actions, the better the perceived quality of risk control. Equally, the results suggest that the quality of risk control and the benefits of ERM follow different patterns, given that there is no correlation between the two concepts and that the KM variables influence each concept differently. The ERM scenario is different from that of risk control because ERM, as an answer to RM failures and an adaptation to new regulation in financial institutions, has led organizations to adopt new processes, technologies, and governance models. Thus, the search for factors influencing the perceived value of ERM implementation needs additional analysis, because improvements in individual RM processes do not have the same effect on the perceived value of ERM. Based on these model results and the literature review, the basis of the ERKMAS (Enterprise Risk Knowledge Management System) is presented.

Relevance: 60.00%

Abstract:

The present global economic crisis creates doubts about the good use of accumulated experience and knowledge in managing risk in financial services. Typically, risk management practice does not use knowledge management (KM) to improve and to develop new answers to the threats. A key reason is that it is not clear how to break down the “organizational silos” view of risk management (RM) that is commonly taken. As a result, there has been relatively little work on finding the relationships between RM and KM. We have been doing research for the last couple of years on the identification of relationships between these two disciplines. At ECKM 2007 we presented a general review of the literature(s) and some hypotheses for starting research on KM and its relationship to the perceived value of enterprise risk management. This article presents findings based on our preliminary analyses, concentrating on those factors affecting the perceived quality of risk knowledge sharing. These come from a questionnaire survey of RM employees in organisations in the financial services sector, which yielded 121 responses. We have included five explanatory variables for the perceived quality of risk knowledge sharing. These comprised two variables relating to people (organizational capacity for work coordination and perceived quality of communication among groups), one relating to process (perceived quality of risk control) and two related to technology (web channel functionality and RM information system functionality). Our findings so far are that four of these five variables have a significant positive association with the perceived quality of risk knowledge sharing: contrary to expectations, web channel functionality did not have a significant association. Indeed, in some of our exploratory regression studies its coefficient (although not significant) was negative. 
In stepwise regression, the variable organizational capacity for work coordination accounted for by far the largest part of the variation in the dependent variable perceived quality of risk knowledge sharing. The “people” variables thus appear to have the greatest influence on the perceived quality of risk knowledge sharing, even in a sector that relies heavily on technology and on quantitative approaches to decision making. We have also found similar results with the dependent variable perceived value of Enterprise Risk Management (ERM) implementation.

Relevance: 60.00%

Abstract:

The relationship between the daily deposition of soredia of Hypogymnia physodes (L.) Nyl. and local climatic records was studied in the field during three periods at a site in Seattle, WA, U.S.A.: (1) 11 August – 16 September 1986 (Study A); (2) 16 December – 11 January 1987 (Study B) and (3) 8 July 1988 – 30 January 1989 (Study C). The soredia were trapped on adhesive strips placed at various locations on a Prunus blireiana L. tree for 24-hour periods. A correlation matrix of the data from all three studies revealed a negative correlation between soredial deposition and relative humidity, and positive correlations with rainfall and temperature. Multiple regression and forward stepwise regression analysis selected relative humidity as the most significant climatic variable, i.e. more soredia tended to be deposited when relative humidity was low. Analysis of individual studies by multiple regression revealed: (1) no significant relationships between soredial deposition and climate in Study A; (2) positive relationships with temperature and wind speed in Study B and (3) positive relationships with wind speed and rainfall in the summer/autumn months of Study C; in the winter months no relationships with climate were found because few soredia were deposited. The data suggest that in the field seasonal photoperiod differences combined with moderately high temperatures and high relative humidity may promote soredial formation and accumulation on thalli prior to soredial dispersal. In addition, low relative humidity may promote soredial release, while wind and raindrops may be agents of dispersal.

Relevance: 60.00%

Abstract:

Objective: The purpose of this study was to determine the extent to which mobility indices (such as walking speed and postural sway), motor initiation, and cognitive function, specifically executive functions, including spatial planning, visual attention, and within-participant variability, differentially predicted collisions on the near and far sides of the road with increasing age. Methods: Adults aged over 45 years participated in cognitive tests measuring executive function and visual attention (using Useful Field of View; UFoV®) and mobility assessments (walking speed, sit-to-stand, self-reported mobility, and postural sway assessed using motion capture cameras), and made road-crossing choices in a filmed two-way real-traffic pedestrian simulation. Results: A stepwise regression model (comprising walking speed, start-up delay variability, and processing speed) explained 49.4% of the variance in near-side crossing errors. Walking speed, start-up delay measures (average and variability), and spatial planning explained 54.8% of the variance in far-side unsafe crossing errors. Start-up delay was predicted by walking speed only (explaining 30.5% of the variance). Conclusion: Walking speed and start-up delay measures were consistent predictors of unsafe crossing behaviours. Cognitive measures, however, differentially predicted near-side errors (processing speed) and far-side errors (spatial planning). These findings offer potential contributions for identifying and rehabilitating at-risk older pedestrians.

Relevance: 40.00%

Abstract:

An investigator may also wish to select a small subset of the X variables which give the best prediction of the Y variable. In this case, the question is how many variables the regression equation should include. One method would be to calculate the regression of Y on every subset of the X variables and choose the subset that gives the smallest mean square deviation from the regression. Most investigators, however, prefer to use a ‘stepwise multiple regression’ procedure. There are two forms of this analysis, called the ‘step-up’ (or ‘forward’) method and the ‘step-down’ (or ‘backward’) method. This Statnote illustrates the use of stepwise multiple regression with reference to the scenario introduced in Statnote 24, viz., the influence of climatic variables on the growth of the crustose lichen Rhizocarpon geographicum (L.) DC.
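The ‘step-up’ idea can be sketched in a few lines of Python. This is an illustrative greedy version that adds, at each step, the predictor that most reduces the residual sum of squares; a full stepwise procedure would instead use an F-to-enter (and, for the step-down form, F-to-remove) criterion to decide when to stop:

```python
import numpy as np

def forward_stepwise(X, y, max_vars=None):
    """Greedy 'step-up' selection: at each step add the predictor that
    most reduces the residual sum of squares of the least-squares fit."""
    n, p = X.shape
    max_vars = max_vars or p
    selected, remaining = [], list(range(p))
    while remaining and len(selected) < max_vars:
        best_rss, best_j = None, None
        for j in remaining:
            cols = selected + [j]
            A = np.column_stack([np.ones(n), X[:, cols]])  # intercept + candidates
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ beta) ** 2)
            if best_rss is None or rss < best_rss:
                best_rss, best_j = rss, j
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# Synthetic illustration: only column 2 carries signal, so it should enter first.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
y = 3.0 * X[:, 2] + 0.1 * rng.normal(size=50)
print(forward_stepwise(X, y, max_vars=1))
```

Without a stopping criterion the loop eventually admits every predictor, which is exactly why the F-test (or an information criterion such as AIC) matters in practice.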

Relevance: 20.00%

Abstract:

In the Bayesian framework, predictions for a regression problem are expressed in terms of a distribution of output values. The mode of this distribution corresponds to the most probable output, while the uncertainty associated with the predictions can conveniently be expressed in terms of error bars. In this paper we consider the evaluation of error bars in the context of the class of generalized linear regression models. We provide insights into the dependence of the error bars on the location of the data points and we derive an upper bound on the true error bars in terms of the contributions from individual data points which are themselves easily evaluated.
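How such error bars depend on the location of the data can be illustrated with a generic Bayesian linear regression sketch (this is a standard textbook model with an assumed Gaussian weight prior, not the specific bound derived in the paper): the predictive variance has a constant noise term plus a term that grows as the query point moves away from the training data.

```python
import numpy as np

def fit_bayes_linreg(Phi, y, alpha=1.0, beta=25.0):
    """Posterior over weights w for y = Phi @ w + noise,
    with prior w ~ N(0, alpha^-1 I) and noise precision beta."""
    S_inv = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi
    S = np.linalg.inv(S_inv)          # posterior covariance of the weights
    mean = beta * S @ Phi.T @ y       # posterior mean of the weights
    return mean, S

def predict(phi_x, mean, S, beta=25.0):
    """Predictive mean and error bar (std) at feature vector phi_x."""
    mu = phi_x @ mean
    var = 1.0 / beta + phi_x @ S @ phi_x   # noise term + data-dependent term
    return mu, np.sqrt(var)

# Fit on inputs in [0, 1]; error bars widen outside that range.
x = np.linspace(0.0, 1.0, 20)
Phi = np.column_stack([np.ones_like(x), x])   # basis: [1, x]
y = 1.0 + 2.0 * x
mean, S = fit_bayes_linreg(Phi, y)
_, s_near = predict(np.array([1.0, 0.5]), mean, S)
_, s_far = predict(np.array([1.0, 5.0]), mean, S)
```

Here `s_far` exceeds `s_near`: the quadratic term `phi_x @ S @ phi_x` is what makes the error bars location dependent.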

Relevance: 20.00%

Abstract:

We propose a Bayesian framework for regression problems, which covers areas which are usually dealt with by function approximation. An online learning algorithm is derived which solves regression problems with a Kalman filter. Its solution always improves with increasing model complexity, without the risk of over-fitting. In the infinite dimension limit it approaches the true Bayesian posterior. The issues of prior selection and over-fitting are also discussed, showing that some of the commonly held beliefs are misleading. The practical implementation is summarised. Simulations using 13 popular publicly available data sets are used to demonstrate the method and highlight important issues concerning the choice of priors.
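The Kalman-filter view of online regression can be sketched as follows. This is a minimal recursive Bayesian linear-regression update (a Kalman filter with a static state), assuming Gaussian weights and noise; it is an illustration of the general idea, not the paper's exact algorithm:

```python
import numpy as np

class OnlineBayesRegressor:
    """Kalman-filter update for Bayesian linear regression: the weight
    vector is a static state, and each observation (phi, y) refines the
    Gaussian posterior over it."""
    def __init__(self, dim, prior_var=10.0, noise_var=0.1):
        self.m = np.zeros(dim)             # posterior mean of the weights
        self.P = prior_var * np.eye(dim)   # posterior covariance
        self.r = noise_var                 # observation noise variance

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        s = phi @ self.P @ phi + self.r    # innovation variance
        k = self.P @ phi / s               # Kalman gain
        self.m = self.m + k * (y - phi @ self.m)
        self.P = self.P - np.outer(k, phi @ self.P)

    def predict(self, phi):
        return np.asarray(phi, dtype=float) @ self.m

# Stream observations of y = 1 + 2x one at a time.
reg = OnlineBayesRegressor(dim=2)
for x in np.linspace(0.0, 1.0, 50):
    reg.update([1.0, x], 1.0 + 2.0 * x)
```

After the stream, `reg.predict([1.0, 3.0])` is close to 7, even though no data near x = 3 was seen: with an exactly linear target the posterior mean converges to the true weights, and each update costs only a few small matrix products.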

Relevance: 20.00%

Abstract:

The Bayesian analysis of neural networks is difficult because the prior over functions has a complex form, leading to implementations that either make approximations or use Monte Carlo integration techniques. In this paper I investigate the use of Gaussian process priors over functions, which permit the predictive Bayesian analysis to be carried out exactly using matrix operations. The method has been tested on two challenging problems and has produced excellent results.

Relevance: 20.00%

Abstract:

The Bayesian analysis of neural networks is difficult because a simple prior over weights implies a complex prior distribution over functions. In this paper we investigate the use of Gaussian process priors over functions, which permit the predictive Bayesian analysis for fixed values of hyperparameters to be carried out exactly using matrix operations. Two methods, using optimization and averaging (via Hybrid Monte Carlo) over hyperparameters have been tested on a number of challenging problems and have produced excellent results.

Relevance: 20.00%

Abstract:

Solving many scientific problems requires effective regression and/or classification models for large high-dimensional datasets. Experts from these problem domains (e.g. biologists, chemists, financial analysts) have insights into the domain which can be helpful in developing powerful models, but they need a modelling framework that helps them to use these insights. Data visualisation is an effective technique for presenting data and eliciting feedback from the experts. A single global regression model can rarely capture the full behavioural variability of a huge multi-dimensional dataset. Instead, local regression models, each focused on a separate area of input space, often work better, since the behaviour of different areas may vary. Classical local models such as Mixture of Experts segment the input space automatically, which is not always effective and lacks the involvement of domain experts to guide a meaningful segmentation of the input space. In this paper we address this issue by allowing domain experts to interactively segment the input space using data visualisation. The resulting segmentation is then used to develop effective local regression models.

Relevance: 20.00%

Abstract:

Gaussian processes provide natural non-parametric prior distributions over regression functions. In this paper we consider regression problems where there is noise on the output, and the variance of the noise depends on the inputs. If we assume that the noise is a smooth function of the inputs, then it is natural to model the noise variance using a second Gaussian process, in addition to the Gaussian process governing the noise-free output value. We show that prior uncertainty about the parameters controlling both processes can be handled and that the posterior distribution of the noise rate can be sampled from using Markov chain Monte Carlo methods. Our results on a synthetic data set give a posterior noise variance that well-approximates the true variance.

Relevance: 20.00%

Abstract:

In most treatments of the regression problem it is assumed that the distribution of target data can be described by a deterministic function of the inputs, together with additive Gaussian noise having constant variance. The use of maximum likelihood to train such models then corresponds to the minimization of a sum-of-squares error function. In many applications a more realistic model would allow the noise variance itself to depend on the input variables. However, the use of maximum likelihood to train such models would give highly biased results. In this paper we show how a Bayesian treatment can allow for an input-dependent variance while overcoming the bias of maximum likelihood.

Relevance: 20.00%

Abstract:

The problem of regression under Gaussian assumptions is treated generally. The relationship between Bayesian prediction, regularization and smoothing is elucidated. The ideal regression is the posterior mean and its computation scales as O(n³), where n is the sample size. We show that the optimal m-dimensional linear model under a given prior is spanned by the first m eigenfunctions of a covariance operator, which is a trace-class operator. This is an infinite-dimensional analogue of principal component analysis. The importance of Hilbert space methods to practical statistics is also discussed.

Relevance: 20.00%

Abstract:

The main aim of this paper is to provide a tutorial on regression with Gaussian processes. We start from Bayesian linear regression, and show how by a change of viewpoint one can see this method as a Gaussian process predictor based on priors over functions, rather than on priors over parameters. This leads into a more general discussion of Gaussian processes in Section 4. Section 5 deals with further issues, including hierarchical modelling and the setting of the parameters that control the Gaussian process, the covariance functions for neural network models, and the use of Gaussian processes in classification problems.
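The "priors over functions" viewpoint reduces prediction to matrix operations on a covariance (kernel) matrix. A minimal sketch, assuming a squared-exponential covariance and 1-D inputs (the covariance function and its parameters here are illustrative choices, not the tutorial's specific ones):

```python
import numpy as np

def rbf_kernel(A, B, length=1.0, amp=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = A[:, None] - B[None, :]
    return amp * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(X, y, Xs, noise=1e-2):
    """Exact GP posterior mean and pointwise variance at test inputs Xs,
    given noisy training targets y at inputs X."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))   # train covariance + noise
    Ks = rbf_kernel(Xs, X)                          # test/train cross-covariance
    Kss = rbf_kernel(Xs, Xs)                        # test covariance
    alpha = np.linalg.solve(K, y)                   # the O(n^3) step
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Fit a noiseless sine curve on [0, 3] and predict at x = 1.5.
X = np.linspace(0.0, 3.0, 15)
y = np.sin(X)
mean, var = gp_predict(X, y, np.array([1.5]))
```

The posterior mean at a training input recovers the observed value up to the noise level, and `var` shrinks near the data, which is the exact-inference behaviour the tutorial builds on.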