941 results for CONDITIONAL HETEROSKEDASTICITY
Abstract:
Margin policy is used by regulators for the purpose of inhibiting excessive volatility and stabilizing the stock market in the long run. The effect of this policy on the stock market has been widely tested empirically. However, most prior studies are limited in the sense that they investigate the margin requirement for the overall stock market rather than for individual stocks, and the time periods examined are confined to the pre-1974 period, as no change in the margin requirement occurred post-1974 in the U.S. This thesis addresses the above limitations by providing a direct examination of the effect of margin requirements on the return, volume, and volatility of individual companies and by using more recent data from the Canadian stock market. Using the methodologies of the variance ratio test and an event study with a conditional volatility (EGARCH) model, we find no convincing evidence that changes in margin requirements affect subsequent stock return volatility. We find similar results for returns and trading volume. These empirical findings lead us to conclude that the use of margin policy by regulators fails to achieve the goal of inhibiting speculative activity and stabilizing volatility.
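The variance-ratio step of the methodology above can be sketched as follows. This is a minimal Lo-MacKinlay-style statistic assuming overlapping q-period returns; the function name and normalization are illustrative, and the EGARCH event-study stage is not shown:

```python
import numpy as np

def variance_ratio(returns, q):
    """Lo-MacKinlay-style variance ratio: variance of overlapping q-period
    returns divided by q times the variance of 1-period returns.
    Under a random walk the ratio should be close to 1."""
    r = np.asarray(returns, dtype=float)
    n = len(r)
    mu = r.mean()
    var1 = np.sum((r - mu) ** 2) / (n - 1)
    rq = np.convolve(r, np.ones(q), mode="valid")   # overlapping q-period sums
    varq = np.sum((rq - q * mu) ** 2) / (q * (len(rq) - 1))
    return varq / var1
```

Persistent deviations of the ratio above or below 1 across horizons q would indicate positive or negative return autocorrelation, the kind of volatility structure the margin-requirement events are tested against.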
Abstract:
Stimulus equivalence involves teaching two conditional discriminations that share one stimulus in common and testing all possible conditional discriminations not taught (Saunders & Green, 1999). Despite considerable research in the laboratory, applied studies of stimulus equivalence have been limited (Vause, Martin, Marion, & Sakko, 2005). This study investigated the field effectiveness of stimulus equivalence in teaching reading skills to children with autism. Participants were four children with autism receiving centre-based intensive behavioural intervention (IBI) treatment. Three of the participants, who already matched pictures to their dictated names, demonstrated six to eight additional emergent performances after being taught only to match written words to the same names. One participant struggled with the demands of the study and his participation was discontinued. Results suggest that stimulus equivalence provided an effective and efficient teaching strategy for three of the four participants in this study.
Abstract:
Over the years, researchers have investigated direct, conditional, and mediational pathways of adolescent aggression in relation to both temperament and parenting behaviours. However, no study to date has considered these relations with respect to a measure of aggression differentiated by form (e.g., overt, relational) and function (e.g., proactive, reactive). The present study examined the differential association of adolescent temperament and authoritative parenting with four subtypes of aggression. Participants included mothers, fathers, and one adolescent (between the ages of 10 and 19) from 663 families, recruited through random digit dialing. Parents reported on their child's temperament and occurrence of aggressive behaviours in addition to the perception of their own authoritative parenting. Adolescents reported on their own temperament and aggressive behaviours as well as on both their mother's and father's authoritative parenting. Multiple regression analyses confirmed predictions that some aspects of temperament and authoritative parenting provide motivation towards engagement in different aggressive behaviours. For example, higher negative affect was related to reactive types of aggression, whereas a strong desire for novel or risky behaviours related to proactive aggression. However, differences in effortful control altered the trajectory for both relationships. Higher levels of self-regulation reduced the impact of negative affect on reactive-overt aggression. Greater self-regulation also reduced the impact of surgency on proactive-overt aggression when age was a factor. Structural equation modeling was then used to assess the process through which adolescents become more or less susceptible to impulsive behaviours. Although the issue of bi-directionality cannot be ruled out, temperament characteristics were the proximal correlate for aggression subtypes as opposed to authoritative parenting dimensions.
Effortful control was found to partially mediate the relation between parental acceptance/involvement and reactive-relational and reactive-overt aggression, suggesting that higher levels of warmth and support as perceived by the child related to increased levels of self-regulation and emotional control, which in turn led to less reactive-relational and less reactive-overt types of aggression in adolescents. On the other hand, negative affect partially mediated the relation between parental psychological autonomy granting and these two subtypes of aggression, supporting predictions that higher levels of autonomy granting (perceived independence) related to lower levels of frustration, which in turn led to less reactive-relational and reactive-overt aggression in adolescents. Both findings provide less evidence for the evocative person-environment correlation and more support for temperament being an open system shaped by experience and authoritative parenting dimensions. As one of the first known studies examining the differential association of authoritative parenting and temperament with aggression subtypes, this study demonstrates the role parents can play in shaping and altering their children's temperament and the effects it can have on aggressive behaviour.
Abstract:
This thesis studies the impact of macroeconomic announcements on the U.S. Treasury market and investigates profitable opportunities around macroeconomic announcements using data from the eSpeed electronic trading platform. We investigate how macroeconomic announcements affect the return predictability of trade imbalance for the 2-year, 5-year, and 10-year U.S. Treasury notes and 30-year U.S. Treasury bonds. The goal of this thesis is to develop a methodology to identify informed trades and estimate the trade imbalance based on informed trades. We use the daily order book slope as a proxy for dispersion of beliefs among investors. Regression results in this thesis indicate that, on announcement days with a high dispersion of beliefs, daily trade imbalance estimated from informed trades significantly predicts returns on the following day. In addition, we develop a trade-imbalance-based trading strategy conditional on dispersion of beliefs, informed trades, and announcement days. The trading strategy yields significantly positive net returns for the 2-year T-notes.
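The next-day predictability test described above can be sketched as a plain OLS predictive regression. The variable names are hypothetical, and the informed-trade classification and order-book-slope conditioning used in the thesis are not reproduced here:

```python
import numpy as np

def predictive_regression(imbalance, next_day_return):
    """OLS of next-day return on today's trade imbalance:
    r_{t+1} = a + b * TI_t + e_{t+1}; returns [a, b]."""
    ti = np.asarray(imbalance, dtype=float)
    r = np.asarray(next_day_return, dtype=float)
    X = np.column_stack([np.ones(len(ti)), ti])
    beta, *_ = np.linalg.lstsq(X, r, rcond=None)
    return beta
```

In the thesis's design, a significantly positive slope b on high-dispersion announcement days is the evidence that informed-trade imbalance forecasts returns.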
Abstract:
Consumption values and different usage situations have received extensive interest from scholars; however, there is a lack of understanding regarding how these two constructs interact when it comes to the purchase decisions of consumers. This study examines the relationship between consumption values, consumption situations, and consumers’ purchasing decisions in terms of their willingness to pay and the purchase quantity. First, my model proposes that all four consumption values and different situations have a positive effect on consumers’ willingness to pay as well as the quantity they purchase. It also proposes that varying usage situations moderate the effect of consumption values on consumers’ purchasing decisions. In my conceptual model, I have also integrated the epistemic and conditional values, where there is a gap in the existing literature. Prior literature has isolated the consumption values when studying how they affect consumer behavior and has not examined how consumption situations moderate the relationship between consumption values and purchasing decisions. Also, the existing literature has mostly focused on how consumption values affect purchase intentions, brand loyalty, or satisfaction, whereas my study focuses on purchasing decisions. For my study, participants were randomly chosen from the general wine consumer population, ranged in age from 20 to 75, and comprised 83 male and 119 female respondents. The data received from my respondents support my hypotheses for the model. In my final chapter, I discuss the theoretical and managerial implications as well as suggestions for future research.
Abstract:
Symmetry group methods are applied to obtain all explicit group-invariant radial solutions to a class of semilinear Schrödinger equations in dimensions n ≥ 1. Both focusing and defocusing cases of a power nonlinearity are considered, including the special case of the pseudo-conformal power p = 4/n relevant for critical dynamics. The methods involve, first, reduction of the Schrödinger equations to group-invariant semilinear complex 2nd-order ordinary differential equations (ODEs) with respect to an optimal set of one-dimensional point symmetry groups, and second, use of inherited symmetries, hidden symmetries, and conditional symmetries to solve each ODE by quadratures. Through Noether’s theorem, all conservation laws arising from these point symmetry groups are listed. Some group-invariant solutions are found to exist for values of n other than just positive integers, and in such cases an alternative two-dimensional form of the Schrödinger equations involving an extra modulation term with a parameter m = 2 − n ≠ 0 is discussed.
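Written schematically, a radial semilinear Schrödinger equation of the kind described takes the form below; the symbols k and p are illustrative, since the abstract does not fix its notation:

```latex
i\,u_t + u_{rr} + \frac{n-1}{r}\,u_r + k\,|u|^{p}\,u = 0,
\qquad k = \pm 1 \ \text{(focusing / defocusing)},
```

with the pseudo-conformal (critical) case corresponding to p = 4/n.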
Abstract:
The cell of origin and the triggering events for leukaemia are mostly unknown. Here we show that the bone marrow contains a progenitor that expresses renin throughout development and possesses a B-lymphocyte pedigree. This cell requires RBP-J to differentiate. Deletion of RBP-J in these renin-expressing progenitors enriches the precursor B-cell gene programme and constrains lymphocyte differentiation, facilitated by H3K4me3 activating marks in genes that control the pre-B stage. Mutant cells undergo neoplastic transformation, and mice develop a highly penetrant B-cell leukaemia with multi-organ infiltration and early death. These renin-expressing cells appear uniquely vulnerable, as other conditional models of RBP-J deletion do not result in leukaemia. The discovery of these unique renin progenitors in the bone marrow and the model of leukaemia described herein may enhance our understanding of normal and neoplastic haematopoiesis.
Abstract:
We assess the predictive ability of three VPIN metrics on the basis of two highly volatile market events in China, and examine the association between VPIN and toxicity-induced volatility through conditional probability analysis and multiple regression. We examine the dynamic relationship between VPIN and high-frequency liquidity using Vector Auto-Regression models, Granger causality tests, and impulse response analysis. Our results suggest that Bulk Volume VPIN has the best risk-warning effect among major VPIN metrics. VPIN has a positive association with market volatility induced by toxic information flow. Most importantly, we document a positive feedback effect between VPIN and high-frequency liquidity, whereby a negative liquidity shock boosts VPIN, which, in turn, leads to a further liquidity drain. Our study provides empirical evidence of an intrinsic game between informed traders and market makers when facing toxic information in the high-frequency trading world.
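The Granger causality step of the VPIN-liquidity analysis can be sketched as a restricted-versus-unrestricted OLS comparison. This is a minimal bivariate version with assumed notation, not the paper's full VAR specification:

```python
import numpy as np

def granger_f(x, y, p=1):
    """F-statistic for H0: p lags of x add no predictive power for y
    beyond y's own p lags (restricted vs. unrestricted OLS)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(y)
    lags = lambda v: np.column_stack([v[p - k: n - k] for k in range(1, p + 1)])
    Y = y[p:]
    Zr = np.column_stack([np.ones(n - p), lags(y)])   # restricted: own lags only
    Zu = np.hstack([Zr, lags(x)])                     # unrestricted: add lags of x
    rss = lambda Z: np.sum((Y - Z @ np.linalg.lstsq(Z, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Zr), rss(Zu)
    df = (n - p) - Zu.shape[1]
    return ((rss_r - rss_u) / p) / (rss_u / df)
```

A large F in one direction but not the other is the pattern behind the feedback claim: liquidity shocks should help predict VPIN, and VPIN should in turn help predict subsequent liquidity.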
Abstract:
This case study examines how The City, Inc.’s work within North and South Minneapolis, Minnesota neighborhoods from 1987 to 1992 was framed within a compilation of articles drawn from prominent Twin Cities’ daily newspapers. Positioned within a conceptual framework based on the ethical philosophy of Emmanuel Levinas, this study explores how the idea of community, as constructed and reinforced through organizational initiatives and local print media, impacts the everyday relationships of those within and between communities. Framed within a discourse analysis, Levinasian ethics considers which aspects of community discourse restrict and oppress the relation with the other. The study concludes by suggesting how the identified aspects of conditional belonging, finding the trace, and building community can be valuable in offering an alternative to assessment-style research by considering the relationship and responsibility of the one for the other.
Abstract:
The purpose of this project was to develop an instructors’ handbook that provides the declarative, procedural, and conditional knowledge associated with the interactive instructional approach, differentiated instruction, and the gradual release of responsibility framework for teaching reading to English as a second language adult literacy learners. The need for this handbook was determined by conducting a critical analysis of existing handbooks and concluding that no handbook completely addressed the 3 types of knowledge for the 3 instructional processes. A literature review was conducted to examine the nature, use, and effectiveness of the 3 instructional processes when teaching reading to ESL adult literacy learners. The literature review also examined teachers’ preferences for reading research and found that texts that were relevant, practical, and accessible were favoured. Hence, these 3 elements were incorporated as part of the handbook design. Three peer reviewers completed a 35-item 5-point Likert scale evaluation form that also included 5 open-ended questions. Their feedback about the handbook’s relevancy, practicality, accessibility, and face validity was incorporated into the final version of the handbook presented here. Reference to the handbook by ESL adult literacy instructors has the potential to support evidence-informed lesson planning, which can support ESL adult literacy learners in achieving their goals and contributing to their societies in multiple and meaningful ways.
Abstract:
Latent variable models in finance originate both from asset pricing theory and time series analysis. These two strands of literature appeal to two different concepts of latent structures, which are both useful to reduce the dimension of a statistical model specified for a multivariate time series of asset prices. In the CAPM or APT beta pricing models, the dimension reduction is cross-sectional in nature, while in time-series state-space models, dimension is reduced longitudinally by assuming conditional independence between consecutive returns, given a small number of state variables. In this paper, we use the concept of Stochastic Discount Factor (SDF) or pricing kernel as a unifying principle to integrate these two concepts of latent variables. Beta pricing relations amount to characterizing the factors as a basis of a vector space for the SDF. The coefficients of the SDF with respect to the factors are specified as deterministic functions of some state variables which summarize their dynamics. In beta pricing models, it is often said that only the factorial risk is compensated since the remaining idiosyncratic risk is diversifiable. Implicitly, this argument can be interpreted as a conditional cross-sectional factor structure, that is, a conditional independence between contemporaneous returns of a large number of assets, given a small number of factors, as in standard Factor Analysis. We provide this unifying analysis in the context of conditional equilibrium beta pricing as well as asset pricing with stochastic volatility, stochastic interest rates and other state variables. We address the general issue of econometric specifications of dynamic asset pricing models, which cover the modern literature on conditionally heteroskedastic factor models as well as equilibrium-based asset pricing models with an intertemporal specification of preferences and market fundamentals.
We interpret various instantaneous causality relationships between state variables and market fundamentals as leverage effects and discuss their central role relative to the validity of standard CAPM-like stock pricing and preference-free option pricing.
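The unifying SDF principle referred to above can be stated compactly; the notation below is assumed for illustration and is not taken from the paper. The pricing kernel prices every return, and a beta pricing relation follows when the SDF is spanned by a small set of factors whose coefficients depend on state variables:

```latex
E_t\!\left[m_{t+1}\,R_{i,t+1}\right] = 1, \qquad
m_{t+1} = \sum_{k} \lambda_{k}(Y_t)\,F_{k,t+1}
\;\Longrightarrow\;
E_t\!\left[R_{i,t+1}\right] - r_{f,t} = \beta_{i,t}'\,\nu_t ,
```

where Y_t denotes the state variables summarizing the dynamics of the SDF coefficients and ν_t collects the conditional factor risk premia.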
Abstract:
The objective of this paper is to identify the factors likely to explain bank failures within the West African Economic and Monetary Union (UEMOA) between 1980 and 1995. Using a conditional logit model on panel data, our results show that the variables that positively affect the probability of bank failure are: i) the level of indebtedness to the central bank; ii) a low level of available and demand deposit accounts; iii) commercial bill portfolios relative to total loans; iv) a low amount of term deposits of more than 2 and up to 10 years relative to total assets; and v) the ratio of liquid assets to total assets. Conversely, the variables that contribute positively to the likelihood of bank survival are: i) the ratio of capital to total assets; ii) net profits relative to total assets; iii) the ratio of total loans to total assets; iv) 2-year term deposits relative to total assets; and v) the level of commitments in the form of guarantees and endorsements relative to total assets. The ratios of commercial bill portfolios and of liquid assets to total assets are the variables that explain the failure of commercial banks, whereas term deposits of more than 2 and up to 10 years are at the origin of the failures of development banks. These failures were considerably reduced by the creation in 1989 of the regional banking regulation commission. Within the UEMOA, only the variable assigned to Senegal appears to contribute positively to the probability of failure.
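The logit link underlying the estimation can be sketched as a plain maximum-likelihood fit; this is an unconditional logit for illustration, since the paper's conditional panel logit additionally conditions out group effects, and the variable names are hypothetical:

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Maximum-likelihood fit of a plain logit model
    P(fail = 1 | x) = 1 / (1 + exp(-x'beta)) by Newton-Raphson.
    (A conditional panel logit would further condition on group totals;
    that refinement is not reproduced here.)"""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        pr = 1.0 / (1.0 + np.exp(-(X @ beta)))
        grad = X.T @ (y - pr)                     # score vector
        H = (X * (pr * (1 - pr))[:, None]).T @ X  # observed information
        beta += np.linalg.solve(H, grad)
    return beta
```

A positive fitted coefficient on a ratio (say, liquid assets to total assets) means that higher values of that ratio raise the estimated probability of failure, which is how the lists of failure and survival variables above are read.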
Abstract:
In this paper, we develop finite-sample inference procedures for stationary and nonstationary autoregressive (AR) models. The method is based on special properties of Markov processes and a split-sample technique. The results on Markovian processes (intercalary independence and truncation) only require the existence of conditional densities. They are proved for possibly nonstationary and/or non-Gaussian multivariate Markov processes. In the context of a linear regression model with AR(1) errors, we show how these results can be used to simplify the distributional properties of the model by conditioning a subset of the data on the remaining observations. This transformation leads to a new model which has the form of a two-sided autoregression to which standard classical linear regression inference techniques can be applied. We show how to derive tests and confidence sets for the mean and/or autoregressive parameters of the model. We also develop a test on the order of an autoregression. We show that a combination of subsample-based inferences can improve the performance of the procedure. An application to U.S. domestic investment data illustrates the method.
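For an AR(1) model, the two-sided autoregression obtained by the conditioning step has the schematic form below; the coefficient notation is illustrative, not the paper's:

```latex
y_t = \gamma_0 + \gamma_1\,y_{t-1} + \gamma_2\,y_{t+1} + \eta_t ,
```

where, by intercalary independence, the errors η_t are independent across the retained (e.g., even-indexed) observations, so standard classical linear regression inference applies to this transformed model.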
Abstract:
In this paper, we analyze recent developments in econometrics in light of the theory of statistical tests. We first review some fundamental principles of the philosophy of science and of statistical theory, emphasizing parsimony and falsifiability as criteria for evaluating models, the role of testing theory as a formalization of the falsification principle for probabilistic models, and the logical justification of the basic notions of testing theory (such as the level of a test). We then show that some of the most widely used statistical and econometric methods are fundamentally inappropriate for the problems and models considered, while many hypotheses for which testing procedures are commonly proposed are in fact not testable at all. Such situations lead to ill-posed statistical problems. We analyze some particular cases of such problems: (1) the construction of confidence intervals in structural models that raise identification problems; and (2) the construction of tests for nonparametric hypotheses, including procedures robust to heteroskedasticity, non-normality, or dynamic misspecification. We point out that these difficulties often stem from the ambition to weaken the regularity conditions necessary for any statistical analysis, as well as from an inappropriate use of asymptotic distributional results. Finally, we underscore the importance of formulating testable hypotheses and models, and of proposing econometric techniques whose properties are demonstrable in finite samples.
Abstract:
Presently, conditions ensuring the validity of bootstrap methods for the sample mean of (possibly heterogeneous) near epoch dependent (NED) functions of mixing processes are unknown. Here we establish the validity of the bootstrap in this context, extending the applicability of bootstrap methods to a class of processes broadly relevant for applications in economics and finance. Our results apply to two block bootstrap methods: the moving blocks bootstrap of Künsch (1989) and Liu and Singh (1992), and the stationary bootstrap of Politis and Romano (1994). In particular, the consistency of the bootstrap variance estimator for the sample mean is shown to be robust against heteroskedasticity and dependence of unknown form. The first order asymptotic validity of the bootstrap approximation to the actual distribution of the sample mean is also established in this heterogeneous NED context.
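The moving blocks bootstrap for the sample mean can be sketched as follows; the function name and the choice of block length are illustrative, and the paper's NED validity conditions are of course not something code can exhibit:

```python
import numpy as np

def moving_blocks_mean(x, block_len, n_boot, rng):
    """Moving blocks bootstrap for the sample mean: resample the series by
    concatenating randomly chosen overlapping blocks of length block_len,
    truncate to the original length, and record each replicate's mean."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    n_blocks = -(-n // block_len)                 # ceil(n / block_len)
    means = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n - block_len + 1, size=n_blocks)
        sample = np.concatenate([x[s:s + block_len] for s in starts])[:n]
        means[b] = sample.mean()
    return means
```

Because whole blocks are resampled, short-range dependence within each block is preserved, which is what lets the dispersion of the replicate means estimate the variance of the sample mean under dependence of unknown form.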