Abstract:
The goal of this study was to examine the role of organizational causal attribution in understanding the relation of work stressors (work-role overload, excessive role responsibility, and unpleasant physical environment) and personal resources (social support and cognitive coping) to such organizational-attitudinal outcomes as work engagement, turnover intention, and organizational identification. In some analyses, cognitive coping was also treated as an organizational outcome. Causal attribution was conceptualized in terms of four dimensions: internality-externality (attributing the cause of one’s successes and failures to oneself, as opposed to external factors), stability (thinking that the cause of one’s successes and failures is stable over time), globality (perceiving the cause to be operative in many areas of one’s life), and controllability (believing that one can control the causes of one’s successes and failures). Several hypotheses were derived from Karasek’s (1989) Job Demands–Control (JD-C) model and from the Job Demands–Resources (JD-R) model (Demerouti, Bakker, Nachreiner & Schaufeli, 2001). Based on the JD-C model, a number of moderation effects were predicted, stating that the strength of the association of work stressors with the outcome variables (e.g., turnover intention) varies as a function of the causal attribution; for example, an unpleasant work environment is more strongly associated with turnover intention among those with an external locus of causality than among those with an internal locus of causality. From the JD-R model, a number of hypotheses on the mediation model were derived.
They were based on two processes posited by the model: an energy-draining process, in which work stressors, along with a mediating effect of causal attribution for failures, deplete the nurses’ energy, leading to turnover intention; and a motivational process, in which personal resources, along with a mediating effect of causal attribution for successes, foster the nurses’ engagement in their work, leading to higher organizational identification and to decreased intention to leave the nursing job. For instance, it was expected that the relationship between work stressors and turnover intention could be explained (mediated) by a tendency to attribute one’s work failures to stable causes. The data were collected from Finnish hospital nurses using electronic questionnaires. Overall, 934 nurses responded to the questionnaire. Work stressors and personal resources were measured by five scales derived from the Occupational Stress Inventory-Revised (Osipow, 1998). Causal attribution was measured using the Occupational Attributional Style Questionnaire (Furnham, 2004). Work engagement was assessed with the Utrecht Work Engagement Scale (Schaufeli et al., 2002), turnover intention with the Van Veldhoven & Meijman (1994) scale, and organizational identification with the Mael & Ashforth (1992) measure. The results provided support for the function of causal attribution in the overall work stress process. The results for the moderation model can be divided into three main findings. First, external locus of causality, together with job level, moderated the relationship between work overload and cognitive coping: this interaction was evident only among nurses in non-supervisory positions. Second, external locus of causality and job level together moderated the relationship between physical environment and turnover intention.
An opposite pattern was found for this interaction: among nurses, externality exacerbated the effect of perceived unpleasantness of the physical environment on turnover intention, whereas among supervisors internality produced the same effect. Third, job level also moderated the effect of controllability attribution on the relationship between physical environment and cognitive coping. For the energetic process, the mediation analyses indicated that the partial model, in which work stressors also have a direct effect on turnover intention, fitted the data better. In the mediation model for the motivational process, an intermediate mediation effect, in which the effects of personal resources on turnover intention went through two mediators (the causal attribution dimensions and organizational identification), fitted the data better. Each dimension of causal attribution appeared to follow a somewhat unique pattern of mediation, in the energetic as well as the motivational process. Overall, the findings on the mediation models partly supported the two simultaneous underlying processes proposed by the JD-R model. While in the energetic process the dimension of externality partially mediated the relationship between stressors and turnover, all the dimensions of causal attribution showed significant mediator effects in the motivational process. The general findings supported both the moderation effect and the mediation effect of causal attribution in the work stress process. The study contributes to several research traditions, including the interaction approach and the JD-C and JD-R models. However, many potential functions of organizational causal attribution are yet to be evaluated by relevant academic and organizational research.
Keywords: organizational causal attribution, optimistic / pessimistic attributional style, work stressors, organizational stress process, stressors in nursing profession, hospital nursing, JD-R model, personal resources, turnover intention, work engagement, organizational identification.
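The moderation hypotheses above correspond, in regression terms, to an interaction between a stressor and an attribution dimension. A minimal sketch on simulated data (variable names and coefficients are invented for illustration; this is not the study's nurse sample or analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Purely illustrative simulated data, not the study's nurse sample:
stressor = rng.normal(size=n)        # e.g. perceived unpleasantness of environment
externality = rng.normal(size=n)     # external locus of causality
# Moderation: the stressor's effect on turnover intention grows with externality.
turnover = (0.3 * stressor + 0.2 * externality
            + 0.4 * stressor * externality
            + rng.normal(scale=0.5, size=n))

# OLS with a product (interaction) term tests the moderation hypothesis:
X = np.column_stack([np.ones(n), stressor, externality, stressor * externality])
beta, *_ = np.linalg.lstsq(X, turnover, rcond=None)
# beta[3] estimates the interaction and should land close to the true 0.4
```

A significant interaction coefficient is what the abstract calls a moderation effect; the mediation hypotheses would instead be tested with a path or structural equation model.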
Abstract:
This thesis studies binary time series models and their applications in empirical macroeconomics and finance. In addition to previously suggested models, new dynamic extensions are proposed to the static probit model commonly used in the previous literature. In particular, we are interested in probit models with an autoregressive model structure. In Chapter 2, the main objective is to compare the predictive performance of the static and dynamic probit models in forecasting the U.S. and German business cycle recession periods. Financial variables, such as interest rates and stock market returns, are used as predictive variables. The empirical results suggest that the recession periods are predictable and that dynamic probit models, especially models with the autoregressive structure, outperform the static model. Chapter 3 proposes a Lagrange Multiplier (LM) test for the usefulness of the autoregressive structure of the probit model. The finite sample properties of the LM test are examined with simulation experiments. The results indicate that the two alternative LM test statistics have reasonable size and power in large samples. In small samples, a parametric bootstrap method is suggested to obtain approximately correct size. In Chapter 4, the predictive power of dynamic probit models in predicting the direction of stock market returns is examined. The novel idea is to use the recession forecast (see Chapter 2) as a predictor of the stock return sign. The evidence suggests that the signs of the U.S. excess stock returns over the risk-free return are predictable both in and out of sample. The new "error correction" probit model yields the best forecasts and also outperforms other predictive models, such as ARMAX models, in terms of statistical and economic goodness-of-fit measures. Chapter 5 generalizes the analysis of the univariate models considered in Chapters 2–4 to the case of a bivariate model.
A new bivariate autoregressive probit model is applied to predict the current state of the U.S. business cycle and growth rate cycle periods. Evidence of predictability of both cycle indicators is obtained and the bivariate model is found to outperform the univariate models in terms of predictive power.
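The autoregressive structure described above can be sketched as a recursion in which the probit's linear index carries over from one period to the next. The parameter values below are invented for illustration, not estimates from the thesis:

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def autoregressive_probit_path(x, omega=0.0, alpha=0.5, beta=-0.8, pi0=0.0):
    """Recursion pi_t = omega + alpha * pi_{t-1} + beta * x_{t-1},
    with P(y_t = 1) = Phi(pi_t); alpha = 0 gives the static probit."""
    pi, probs = pi0, []
    for xt in x:
        pi = omega + alpha * pi + beta * xt   # autoregressive linear index
        probs.append(norm_cdf(pi))            # e.g. recession probability
    return probs

# Example: a falling, then inverted, interest-rate spread pushes the
# implied recession probability up over the sample.
spread = [1.5, 1.0, 0.5, 0.0, -0.5, -1.0]
probs = autoregressive_probit_path(spread)
```

Because of the `alpha * pi` term, the fitted probability responds to the whole history of the predictors, not only the latest observation, which is the dynamic feature the forecast comparisons exploit.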
Abstract:
We present the results of a search for Higgs bosons predicted in two-Higgs-doublet models, in the case where the Higgs bosons decay to tau lepton pairs, using 1.8 inverse fb of integrated luminosity of proton-antiproton collisions recorded by the CDF II experiment at the Fermilab Tevatron. Studying the observed mass distribution in events where one or both tau leptons decay leptonically, no evidence for a Higgs boson signal is observed. The result is used to infer exclusion limits in the two-dimensional parameter space of tan beta versus m(A).
Abstract:
We combine results from searches by the CDF and D0 collaborations for a standard model Higgs boson (H) in the process gg->H->W+W- in p-pbar collisions at the Fermilab Tevatron Collider at sqrt{s}=1.96 TeV. With 4.8 fb-1 of integrated luminosity analyzed at CDF and 5.4 fb-1 at D0, the 95% Confidence Level upper limit on sigma(gg->H) x B(H->W+W-) is 1.75 pb at m_H=120 GeV, 0.38 pb at m_H=165 GeV, and 0.83 pb at m_H=200 GeV. Assuming the presence of a fourth sequential generation of fermions with large masses, we exclude at the 95% Confidence Level a standard-model-like Higgs boson with a mass between 131 and 204 GeV.
Abstract:
We study effective models of chiral fields and the Polyakov loop that are expected to describe the dynamics responsible for the phase structure of two-flavor QCD at finite temperature and density. We consider the chiral sector described using either the linear sigma model or the Nambu-Jona-Lasinio model, study the phase diagram, and determine the location of the critical point as a function of the explicit chiral symmetry breaking (i.e., the bare quark mass $m_q$). We also discuss the possible emergence of the quarkyonic phase in this model.
Abstract:
The question at issue in this dissertation is the epistemic role played by ecological generalizations and models. I investigate and analyze such properties of generalizations as lawlikeness, invariance, and stability, and I ask which of these properties are relevant in the context of scientific explanations. I claim that generalizations that are invariant and stable provide generalizable and reliable causal explanations in ecology. An invariant generalization continues to hold or be valid under a special change, called an intervention, that changes the value of its variables. Whether a generalization remains invariant under such interventions is the criterion that determines whether it is explanatory. A generalization can be invariant and explanatory regardless of its lawlike status. Stability concerns how widely a generalization holds across possible background conditions: the more stable a generalization, the less its truth depends on background conditions. Although it is invariance rather than stability that furnishes us with explanatory generalizations, stability has an important function in the context of explanation: it furnishes scientific explanations with extrapolability and reliability. I also discuss non-empirical investigations of models that I call robustness and sensitivity analyses. I call sensitivity analyses those investigations in which a single model is studied with regard to its stability conditions by making changes and variations to the values of the model's parameters. As a general definition of robustness analyses, I propose investigations of variations in the modeling assumptions of different models of the same phenomenon, in which the focus is on whether they produce similar or convergent results.
Robustness and sensitivity analyses are powerful tools for studying the conditions and assumptions under which models break down, and they are especially powerful in pointing out why they do so. They show which conditions or assumptions the results of models depend on. Key words: ecology, generalizations, invariance, lawlikeness, philosophy of science, robustness, explanation, models, stability
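A sensitivity analysis in the sense defined above can be sketched on a toy ecological model: vary a parameter of the logistic growth model and check whether the result of interest (the equilibrium abundance) survives the variation. The model and numbers are illustrative, not drawn from the dissertation:

```python
def logistic_trajectory(r, K=100.0, n0=10.0, steps=200, dt=0.1):
    """Euler-discretized logistic growth dn/dt = r * n * (1 - n / K)."""
    n = n0
    for _ in range(steps):
        n += dt * r * n * (1.0 - n / K)
    return n

# Sensitivity analysis: perturb the growth rate r and ask whether the
# model's prediction -- abundance settling near the carrying capacity K --
# is stable under the perturbation.
results = [logistic_trajectory(r) for r in (0.5, 1.0, 1.5, 2.0)]
# The equilibrium is robust to r; r only changes how fast it is approached.
```

In the terminology above, the prediction "abundance approaches K" is stable under variations of r, whereas a prediction about the time to reach K would be sensitive to it.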
Abstract:
Summary: Reduction of winter flows by means of watershed models
Abstract:
In the thesis we consider inference for cointegration in vector autoregressive (VAR) models. The thesis consists of an introduction and four papers. The first paper proposes a new test for cointegration in VAR models that is directly based on the eigenvalues of the least squares (LS) estimate of the autoregressive matrix. In the second paper we compare a small sample correction for the likelihood ratio (LR) test of cointegrating rank with the bootstrap. The simulation experiments show that the bootstrap works very well in practice and dominates the correction factor. The tests are applied to international stock price data, and the finite sample performance of the tests is investigated by simulating the data. The third paper studies the demand for money in Sweden 1970–2000 using the I(2) model. In the fourth paper we re-examine the evidence of cointegration between international stock prices. The paper shows that some of the previous empirical results can be explained by the small-sample bias and size distortion of Johansen’s LR tests for cointegration. In all papers we work with two data sets. The first data set is a Swedish money demand data set with observations on the money stock, the consumer price index, gross domestic product (GDP), the short-term interest rate and the long-term interest rate. The data are quarterly and the sample period is 1970(1)–2000(1). The second data set consists of month-end stock market index observations for Finland, France, Germany, Sweden, the United Kingdom and the United States from 1980(1) to 1997(2). Both data sets are typical of the sample sizes encountered in economic data, and the applications illustrate the usefulness of the models and tests discussed in the thesis.
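The idea behind the eigenvalue-based test in the first paper can be illustrated on simulated data: in a cointegrated VAR(1), the LS estimate of the autoregressive matrix has one eigenvalue near one (the common stochastic trend) and the rest well inside the unit circle. A hedged sketch of that eigenvalue pattern, not the thesis's actual test statistic or critical values:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 1000

# Simulated cointegrated pair (illustrative, not the thesis's data sets):
# x is a random walk and y tracks x, so (x, y) share one stochastic trend.
x = np.cumsum(rng.normal(size=T))
y = x + rng.normal(size=T)
Z = np.column_stack([x, y])

# LS estimate of A in Z_t = A Z_{t-1} + error:
A, *_ = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)
eigvals = np.sort(np.abs(np.linalg.eigvals(A.T)))
# eigvals[1] should sit near 1 (the unit root), eigvals[0] well below 1 --
# the pattern that signals one cointegrating relation in a bivariate system.
```

The test in the paper turns this pattern into a formal procedure; here the point is only that the eigenvalues of the LS estimate carry the cointegration information directly.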
Abstract:
This study examined the effects of the option Greeks on the trading results of delta hedging strategies under three different time units, or option-pricing models. These time units were calendar time, trading time, and continuous time using discrete approximation (CTDA) time. The CTDA time model is a pricing model that, among other things, accounts for intraday and weekend patterns in volatility. For the CTDA time model, some additional theta measures, believed to be usable in trading, were developed. The study appears to verify that the Greeks differ across time units. It also revealed that these differences influence the delta hedging of options and portfolios. Although it is difficult to say which of the time models is the most usable, since this depends largely on the trader’s view of the passing of time, on market conditions, and on the portfolio in question, the CTDA time model can be viewed as an attractive alternative.
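The practical difference between calendar and trading time is easiest to see over a weekend: from Friday's close, three calendar days but only one trading day remain until a Monday expiry. A minimal Black-Scholes sketch of how the two conventions give different premia (all parameter values invented for illustration):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes call price; T is time to expiry in years."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# An at-the-money option at Friday's close, expiring Monday:
S, K, r, sigma = 100.0, 100.0, 0.02, 0.25
price_calendar = bs_call(S, K, r, sigma, 3 / 365)   # 3 calendar days left
price_trading = bs_call(S, K, r, sigma, 1 / 252)    # 1 trading day left
# The two time conventions give different premia, and hence different
# Greeks and delta-hedge ratios -- the effect the study measures.
```

Theta is where the gap is starkest: under calendar time the option decays over the weekend, under trading time it does not, so the two conventions prescribe different weekend hedges.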
Abstract:
This paper examines how volatility in financial markets can best be modeled. The examination investigates how well linear and nonlinear volatility models absorb skewness and kurtosis. The examination covers the Nordic stock markets of Finland, Sweden, Norway and Denmark. Different linear and nonlinear models are applied, and the results indicate that a linear model can almost always be used for modeling the series under investigation, even though nonlinear models perform slightly better in some cases. These results indicate that the markets under study are exposed to asymmetric patterns only to a certain degree. Negative shocks generally have a more prominent effect on the markets, but these effects are not particularly strong. In terms of absorbing skewness and kurtosis, however, nonlinear models outperform linear ones.
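The asymmetry described above, where negative shocks have a more prominent effect, is what nonlinear volatility models of the GJR-GARCH type capture. A minimal sketch of the conditional-variance recursion, with invented parameter values (this is not the paper's estimated model):

```python
import numpy as np

def gjr_garch_variance(returns, omega=0.05, alpha=0.05, gamma=0.10, beta=0.85):
    """GJR-GARCH(1,1) conditional-variance recursion:
    sigma2_t = omega + (alpha + gamma * 1[r_{t-1} < 0]) * r_{t-1}**2
               + beta * sigma2_{t-1}.
    Setting gamma = 0 reduces this to the symmetric (linear) GARCH(1,1)."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)  # initialize at the sample variance
    for t in range(1, len(returns)):
        leverage = gamma if returns[t - 1] < 0.0 else 0.0
        sigma2[t] = (omega + (alpha + leverage) * returns[t - 1] ** 2
                     + beta * sigma2[t - 1])
    return sigma2

# The asymmetry: a negative shock of the same magnitude raises next-period
# conditional variance more than a positive one.
after_neg = gjr_garch_variance(np.array([-2.0, 0.0]))[1]
after_pos = gjr_garch_variance(np.array([2.0, 0.0]))[1]
```

When the estimated gamma is small, the symmetric linear model is nearly as good, which is the paper's finding for the Nordic markets.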
Abstract:
This study evaluates three different time units in option pricing: trading time, calendar time and continuous time using discrete approximations (CTDA). The CTDA-time model partitions the trading day into 30-minute intervals, where each interval is given a weight corresponding to the historical volatility in the respective interval. Furthermore, the non-trading volatility, both overnight and weekend volatility, is included in the first interval of the trading day in the CTDA model. The three models are tested against market prices. The results indicate that the trading-time model gives the best fit to market prices, in line with the results of previous studies but contrary to expectations under no-arbitrage option pricing. Under no-arbitrage pricing, the option premium should reflect the cost of hedging the expected volatility during the option’s remaining life. The study concludes that the historical patterns in volatility are not fully accounted for by the market; rather, the market prices options closer to trading time.
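The CTDA partition described above can be sketched directly: each 30-minute interval gets a weight proportional to its historical variance, non-trading variance is folded into the first interval, and "CTDA time" then elapses as the cumulative weight. All numbers below are invented placeholders, not estimates from the study:

```python
import numpy as np

# Hypothetical per-interval historical variances for a trading day split
# into twelve 30-minute intervals (U-shaped: volatile open and close):
intraday_var = np.array([2.0, 1.2, 1.0, 0.8, 0.8, 0.8,
                         0.9, 1.0, 1.1, 1.3, 1.5, 1.8])
overnight_weekend_var = 3.0
intraday_var[0] += overnight_weekend_var  # non-trading variance into interval 1

# Each interval's weight is its share of total variance, so "CTDA time"
# passes faster in the high-volatility intervals:
weights = intraday_var / intraday_var.sum()
elapsed = np.cumsum(weights)  # fraction of the day elapsed after each interval
```

Feeding `elapsed` into a pricer in place of uniform clock time is what makes the CTDA model reflect intraday and weekend volatility patterns in the option's remaining life.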