857 results for hierarchical factor model
Abstract:
Previous research shows that correlations tend to increase in magnitude when individuals are aggregated across groups. This suggests that uncorrelated constellations of personality variables (such as the primary scales of Extraversion and Neuroticism) may display much higher correlations in aggregate factor analysis. We hypothesize and report that individual-level factor analysis can be explained in terms of Giant Three (or Big Five) descriptions of personality, whereas aggregate-level factor analysis can be explained in terms of Gray's physiologically based model. Although alternative interpretations exist, aggregate-level factor analysis may correctly identify the basis of an individual's personality as a result of the better reliability of measures due to aggregation. We discuss the implications of this form of analysis in terms of construct validity, personality theory, and its applicability in general. Copyright (C) 2003 John Wiley & Sons, Ltd.
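The aggregation effect described in this abstract can be illustrated numerically. The following is a minimal sketch with invented synthetic data (not from the paper): two traits share a group-level source, and averaging individuals within groups washes out idiosyncratic noise, so the aggregate-level correlation exceeds the individual-level one.

```python
import numpy as np

# Illustrative sketch: aggregating individuals into group means removes
# independent individual-level noise, so the correlation between two
# traits rises at the aggregate level. All numbers are invented.
rng = np.random.default_rng(0)
n_groups, n_per_group = 50, 40

# Group-level trait means sharing a common source (correlated across groups).
common = rng.normal(size=n_groups)
mu_x = common + 0.5 * rng.normal(size=n_groups)
mu_y = common + 0.5 * rng.normal(size=n_groups)

# Individual scores = group mean + large idiosyncratic noise.
x = np.repeat(mu_x, n_per_group) + 3.0 * rng.normal(size=n_groups * n_per_group)
y = np.repeat(mu_y, n_per_group) + 3.0 * rng.normal(size=n_groups * n_per_group)

r_individual = np.corrcoef(x, y)[0, 1]

# Aggregate level: correlate the group means of the observed scores.
gx = x.reshape(n_groups, n_per_group).mean(axis=1)
gy = y.reshape(n_groups, n_per_group).mean(axis=1)
r_aggregate = np.corrcoef(gx, gy)[0, 1]
```

Averaging 40 individuals shrinks the noise variance by a factor of 40 while leaving the shared group-level signal intact, which is why the aggregate correlation is much larger.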
Abstract:
The present study adds to the sparse published Australian literature on the size effect, the book to market (BM) effect and the ability of the Fama French three factor model to account for these effects and to improve on the asset pricing ability of the Capital Asset Pricing Model (CAPM). The present study extends the 1981–1991 period examined by Halliwell, Heaney and Sawicki (1999) a further 10 years to 2000 and addresses several limitations and findings of that research. In contrast to Halliwell, Heaney and Sawicki the current study finds the three factor model provides significantly improved explanatory power over the CAPM, and evidence that the BM factor plays a role in asset pricing.
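The three-factor model the abstract refers to is estimated as a time-series regression of portfolio excess returns on market, size (SMB), and value (HML) factors. Below is a hedged sketch on synthetic monthly data; all numbers and loadings are invented for illustration, not estimates from the Australian sample.

```python
import numpy as np

# Sketch of Fama-French three-factor time-series estimation:
#   R_p - R_f = a + b*MKT + s*SMB + h*HML + e
# Synthetic data with known loadings; OLS recovers them.
rng = np.random.default_rng(1)
T = 240  # months

mkt = rng.normal(0.006, 0.045, T)   # market excess return
smb = rng.normal(0.002, 0.030, T)   # size factor
hml = rng.normal(0.003, 0.030, T)   # book-to-market (value) factor

# Portfolio excess returns generated with hypothetical loadings.
excess = 1.1 * mkt + 0.4 * smb + 0.6 * hml + rng.normal(0, 0.02, T)

# OLS on [1, MKT, SMB, HML]; a CAPM fit would use [1, MKT] only.
X = np.column_stack([np.ones(T), mkt, smb, hml])
coef, *_ = np.linalg.lstsq(X, excess, rcond=None)
alpha, b, s, h = coef
```

Comparing the residual variance of this fit with that of a CAPM regression on the market factor alone is the kind of explanatory-power comparison the study performs.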
Abstract:
The Eysenck Personality Questionnaire-Revised (EPQ-R), the Eysenck Personality Profiler Short Version (EPP-S), and the Big Five Inventory (BFI-V4a) were administered to 135 postgraduate students of business in Pakistan. Whilst the Extraversion and Neuroticism scales from the three questionnaires were highly correlated, it was found that Agreeableness was most highly correlated with Psychoticism in the EPQ-R and Conscientiousness was most highly correlated with Psychoticism in the EPP-S. Principal component analyses with varimax rotation were carried out. The analyses generally suggested that the five-factor model rather than the three-factor model was more robust and better for interpretation of all the higher-order scales of the EPQ-R, EPP-S, and BFI-V4a in the Pakistani data. Results show that the superiority of the five-factor solution results from the inclusion of a broader variety of personality scales in the input data, whereas Eysenck's three-factor solution seems to be best when a less complete but possibly more important set of variables is input. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
This study examines the role of illiquidity (proxied by the proportion of zero returns) as an additional risk factor in asset pricing. We use Portuguese monthly data, covering the period between January 1988 and December 2008. We compute an illiquidity factor using the Fama and French [Fama, E. F., and K. R. French (1993), "Common risk factors in the returns on stocks and bonds", Journal of Financial Economics, Vol. 33, Nº. 1, pp. 3-56] procedure and analyze the performance of CAPM, Fama-French three-factor model and illiquidity-augmented versions of these models in explaining both the time-series and the cross-section of returns. Our results reveal that the effect of characteristic liquidity is subsumed by the models considered, but the risk of illiquidity is not priced in the Portuguese stock market.
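The illiquidity proxy named in the abstract, the proportion of zero returns, is straightforward to compute. A minimal sketch with invented return series (the function name is ours, not from the paper):

```python
import numpy as np

# Zero-return illiquidity proxy: the share of observations with a return
# of exactly zero. Illiquid stocks trade rarely, producing many zeros.
def zero_return_proportion(returns):
    returns = np.asarray(returns, dtype=float)
    return float(np.mean(returns == 0.0))

# Hypothetical return series for two stocks.
illiquid = [0.0, 0.01, 0.0, 0.0, -0.02, 0.0]     # 4 of 6 returns are zero
liquid = [0.012, -0.007, 0.004, 0.0, 0.021, -0.015]

p_illiquid = zero_return_proportion(illiquid)
p_liquid = zero_return_proportion(liquid)
```

Sorting stocks on this proportion and forming a long-short portfolio, in the spirit of the Fama-French factor-construction procedure the abstract cites, yields the illiquidity factor used to augment the CAPM and three-factor models.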
Abstract:
This study aimed to evaluate the psychometric qualities of the Fear Survey Schedule-III in a Portuguese sample. Participants were 1,980 subjects selected by convenience from a normative adult population. Participants' ages ranged from 18 to 80 years (M = 39.5, SD = 8.5), and 59% were female. The psychometric qualities of the scale were evaluated in terms of psychometric sensitivity, construct validity, and reliability. External construct validity was assessed with multigroup analysis in a random sample independent of the initial validation sample. The originally proposed factor model showed an unacceptable fit to the validation sample. The measurement model was then refined in a randomly selected part of the sample. In conclusion, the simplified measurement model showed good factorial fit and was invariant in a second sample independent of the first. A new hierarchical structure was proposed, with a 2nd-order factor named "Medos" ("Fears"), which revealed good psychometric qualities (sensitivity, construct validity, and reliability).
Abstract:
There is recent interest in the generalization of classical factor models in which the idiosyncratic factors are assumed to be orthogonal and there are identification restrictions on cross-sectional and time dimensions. In this study, we describe and implement a Bayesian approach to generalized factor models. A flexible framework is developed to determine the variations attributed to common and idiosyncratic factors. We also propose a unique methodology to select the (generalized) factor model that best fits a given set of data. Applying the proposed methodology to the simulated data and the foreign exchange rate data, we provide a comparative analysis between the classical and generalized factor models. We find that when there is a shift from classical to generalized, there are significant changes in the estimates of the structures of the covariance and correlation matrices while there are less dramatic changes in the estimates of the factor loadings and the variation attributed to common factors.
Abstract:
This paper extends the Nelson-Siegel linear factor model by developing a flexible macro-finance framework for modeling and forecasting the term structure of US interest rates. Our approach is robust to parameter uncertainty and structural change, as we consider instabilities in parameters and volatilities, and our model averaging method allows for investors' model uncertainty over time. Our time-varying parameter Nelson-Siegel Dynamic Model Averaging (NS-DMA) predicts yields better than standard benchmarks and successfully captures plausible time-varying term premia in real time. The proposed model has significant in-sample and out-of-sample predictability for excess bond returns, and the predictability is of economic value.
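The Nelson-Siegel linear factor model that the abstract extends maps three factors (level, slope, curvature) to the whole yield curve through fixed loading functions. A minimal sketch of those loadings, with an illustrative decay parameter and factor values that are ours, not the paper's estimates:

```python
import numpy as np

# Nelson-Siegel yield curve:
#   y(tau) = L + S*(1-exp(-lam*tau))/(lam*tau)
#              + C*[(1-exp(-lam*tau))/(lam*tau) - exp(-lam*tau)]
# lam = 0.0609 is a common monthly-maturity choice in the literature;
# the factor values below are hypothetical.
def nelson_siegel_yield(tau, level, slope, curvature, lam=0.0609):
    x = lam * np.asarray(tau, dtype=float)
    slope_load = (1.0 - np.exp(-x)) / x        # -> 1 as tau -> 0
    curv_load = slope_load - np.exp(-x)        # humped in maturity
    return level + slope * slope_load + curvature * curv_load

maturities = np.array([3, 12, 60, 120])        # months
curve = nelson_siegel_yield(maturities, level=0.05, slope=-0.02, curvature=0.01)
```

With a negative slope factor the short end sits below the long end, producing the familiar upward-sloping curve; the dynamic model averaging in the paper then lets the factor dynamics and volatilities vary over time.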
Abstract:
According to the most widely accepted Cattell-Horn-Carroll (CHC) model of intelligence measurement, each subtest score of the Wechsler Intelligence Scale for Adults (3rd ed.; WAIS-III) should reflect both 1st- and 2nd-order factors (i.e., 4 or 5 broad abilities and 1 general factor). To disentangle the contribution of each factor, we applied a Schmid-Leiman orthogonalization transformation (SLT) to the standardization data published in the French technical manual for the WAIS-III. Results showed that the general factor accounted for 63% of the common variance and that the specific contributions of the 1st-order factors were weak (4.7%-15.9%). We also addressed this issue by using confirmatory factor analysis. Results indicated that the bifactor model (with 1st-order group and general factors) better fit the data than did the traditional higher order structure. Models based on the CHC framework were also tested. Results indicated that a higher order CHC model showed a better fit than did the classical 4-factor model; however, the WAIS bifactor structure was the most adequate. We recommend that users do not discount the Full Scale IQ when interpreting the index scores of the WAIS-III because the general factor accounts for the bulk of the common variance in the French WAIS-III. The 4 index scores cannot be considered to reflect only broad ability because they include a strong contribution of the general factor.
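The variance bookkeeping behind the abstract's "63% of the common variance" figure can be sketched as follows. After a Schmid-Leiman orthogonalization, general-factor and group-factor loadings are uncorrelated, so the share of common variance due to g is a simple ratio of summed squared loadings. The loadings below are hypothetical, not the French WAIS-III values.

```python
import numpy as np

# Share of common variance attributable to the general factor after a
# Schmid-Leiman orthogonalization: sum(g^2) / (sum(g^2) + sum(group^2)).
# Hypothetical loadings for six subtests.
g_loadings = np.array([0.75, 0.70, 0.68, 0.72, 0.65, 0.60])
group_loadings = np.array([0.30, 0.25, 0.35, 0.20, 0.30, 0.40])

common_var_g = np.sum(g_loadings ** 2)
common_var_group = np.sum(group_loadings ** 2)
share_g = common_var_g / (common_var_g + common_var_group)
```

When the general factor dominates this ratio, as the study reports, index scores necessarily mix broad-ability variance with a strong general-factor contribution, which is the basis of the authors' interpretive recommendation.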
Abstract:
In occupational exposure assessment of airborne contaminants, exposure levels can either be estimated through repeated measurements of the pollutant concentration in air, expert judgment or through exposure models that use information on the conditions of exposure as input. In this report, we propose an empirical hierarchical Bayesian model to unify these approaches. Prior to any measurement, the hygienist conducts an assessment to generate prior distributions of exposure determinants. Monte-Carlo samples from these distributions feed two level-2 models: a physical, two-compartment model, and a non-parametric, neural network model trained with existing exposure data. The outputs of these two models are weighted according to the expert's assessment of their relevance to yield predictive distributions of the long-term geometric mean and geometric standard deviation of the worker's exposure profile (level-1 model). Bayesian inferences are then drawn iteratively from subsequent measurements of worker exposure. Any traditional decision strategy based on a comparison with occupational exposure limits (e.g. mean exposure, exceedance strategies) can then be applied. Data on 82 workers exposed to 18 contaminants in 14 companies were used to validate the model with cross-validation techniques. A user-friendly program running the model is available upon request.
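The weighting step described above, combining the outputs of the two level-2 models according to the expert's relevance judgment, can be sketched as a mixture of predictive distributions. Everything below (model outputs, weight, exposure limit) is invented for illustration; it is not the authors' implementation.

```python
import numpy as np

# Combine predictive samples of log(geometric mean exposure) from two
# level-2 models into one mixture, weighted by expert-judged relevance.
rng = np.random.default_rng(4)
n = 10_000

# Hypothetical predictive samples of log(GM) from each level-2 model.
log_gm_physical = rng.normal(np.log(0.8), 0.6, n)  # two-compartment model
log_gm_neural = rng.normal(np.log(1.4), 0.4, n)    # neural-network model

w_physical = 0.7  # expert judges the physical model more relevant here
pick = rng.random(n) < w_physical
log_gm = np.where(pick, log_gm_physical, log_gm_neural)

# Exceedance-style decision summary against an occupational exposure
# limit (OEL); the limit value is hypothetical.
oel = 2.0
p_exceed = float(np.mean(np.exp(log_gm) > oel))
```

Bayesian updating of this predictive distribution as worker measurements arrive is what the level-1 model in the report adds on top of this prior mixture.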
Abstract:
Factor analysis, as a frequent technique for multivariate data inspection, is widely used also for compositional data analysis. The usual way is to use a centered logratio (clr) transformation to obtain the random vector y of dimension D. The factor model is then

y = Λf + e (1)

with the factors f of dimension k < D, the error term e, and the loadings matrix Λ. Using the usual model assumptions (see, e.g., Basilevsky, 1994), the factor analysis model (1) can be written as

Cov(y) = ΛΛᵀ + ψ (2)

where ψ = Cov(e) has a diagonal form. The diagonal elements of ψ as well as the loadings matrix Λ are estimated from an estimate of Cov(y). Given observed clr transformed data Y as realizations of the random vector y, outliers or deviations from the idealized model assumptions of factor analysis can severely affect the parameter estimation. As a way out, robust estimation of the covariance matrix of Y will lead to robust estimates of Λ and ψ in (2), see Pison et al. (2003). Well-known robust covariance estimators with good statistical properties, like the MCD or the S-estimators (see, e.g., Maronna et al., 2006), rely on a full-rank data matrix Y, which is not the case for clr transformed data (see, e.g., Aitchison, 1986). The isometric logratio (ilr) transformation (Egozcue et al., 2003) solves this singularity problem. The data matrix Y is transformed to a matrix Z by using an orthonormal basis of lower dimension. Using the ilr transformed data, a robust covariance matrix C(Z) can be estimated. The result can be back-transformed to the clr space by

C(Y) = V C(Z) Vᵀ

where the matrix V with orthonormal columns comes from the relation between the clr and the ilr transformation. Now the parameters in the model (2) can be estimated (Basilevsky, 1994) and the results have a direct interpretation, since the links to the original variables are still preserved. The above procedure will be applied to data from geochemistry. Our special interest is in comparing the results with those of Reimann et al. (2002) for the Kola project data.
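The clr-to-ilr round trip and the back-transformation C(Y) = V C(Z) Vᵀ can be sketched directly. The sketch below uses one standard orthonormal ilr basis (a choice we make for illustration; Egozcue et al., 2003 define a family of such bases), invented compositional data, and an ordinary covariance standing in for a robust (e.g. MCD) estimate.

```python
import numpy as np

def ilr_basis(D):
    """Columns of V: an orthonormal basis of the clr plane, shape (D, D-1)."""
    V = np.zeros((D, D - 1))
    for j in range(1, D):
        V[:j, j - 1] = 1.0 / j          # first j parts weighted equally
        V[j, j - 1] = -1.0              # balanced against part j+1
        V[:, j - 1] *= np.sqrt(j / (j + 1.0))  # normalize the column
    return V

rng = np.random.default_rng(2)
comp = rng.dirichlet([5, 3, 2, 1], size=200)  # compositions, D = 4

# clr transform: log parts centered per row (rows sum to zero -> singular).
clr = np.log(comp) - np.log(comp).mean(axis=1, keepdims=True)

V = ilr_basis(4)
Z = clr @ V                                    # ilr coordinates, full rank

# An ordinary covariance stands in here for a robust estimator of C(Z).
C_Z = np.cov(Z, rowvar=False)
C_Y = V @ C_Z @ V.T                            # back-transformed clr covariance
```

Because the columns of V span exactly the zero-sum subspace in which clr data live, the back-transformed matrix C(Y) reproduces the clr covariance while the estimation itself happened in full-rank ilr coordinates, which is what makes robust estimators like MCD applicable.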
Abstract:
This study was designed to investigate personality development in children aged 8 to 12. For this purpose, children's self-perceptions were compared to parents' ratings. 506 children and their parents completed a selection of 38 questions from the Hierarchical Personality Inventory for Children (HiPIC). Results showed an age-related increase in the structural congruence of children's ratings compared to parents' ratings and a highly significant increase in the reliabilities of both parents' and children's assessments. The mean correlations between the children's self-descriptions and parents' ratings were higher for Conscientiousness and Imagination than for Extraversion, Benevolence and Emotional Stability, and increased significantly with the children's age. Mean levels decreased with age for Imagination in parents' ratings and for Benevolence, Conscientiousness, and Imagination in children's ratings. This study showed that personality development from 8 to 12 years goes along with an increase in the agreement between the children's self-perceptions and the parents' perceptions of the children's personality.
Abstract:
Recent literature evidences differential associations of personal and general just-world beliefs with constructs in the interpersonal domain. In line with this research, we examine the respective relationships of each just-world belief with the Five-Factor and the HEXACO models of personality in one representative sample of the working population of Switzerland and one sample of the general US population, respectively. One suppressor effect was observed in both samples: Neuroticism and emotionality were positively associated with general just-world belief, but only after controlling for personal just-world belief. In addition, agreeableness was positively, and honesty-humility negatively, associated with general just-world belief but unrelated to personal just-world belief. Conscientiousness was consistently unrelated to either just-world belief, and extraversion and openness to experience revealed unstable coefficients across studies. We discuss these points in light of just-world theory and their implications for future research taking both dimensions into account.
Abstract:
This thesis tested a path model of the relationships of reasons for drinking and reasons for limiting drinking with consumption of alcohol and drinking problems. It was hypothesized that reasons for drinking would be composed of positively and negatively reinforcing reasons, and that reasons for limiting drinking would be composed of personal and social reasons. Problem drinking was operationalized as consisting of two factors, consumption and drinking problems, with a positive relationship between the two. It was predicted that positively and negatively reinforcing reasons for drinking would be associated with heavier consumption and, in turn, more drinking problems, through level of consumption. Negatively reinforcing reasons were also predicted to be associated with drinking problems directly, independent of level of consumption. It was hypothesized that reasons for limiting drinking would be associated with lower levels of consumption and would be related to fewer drinking problems, through level of consumption. Finally, among women, reasons for limiting drinking were expected to be associated with drinking problems directly, independent of level of consumption. The sample was taken from the second phase of the Niagara Young Adult Health Study, a community sample of young adult men and women. Measurement models of reasons for drinking, reasons for limiting drinking, and problem drinking were tested using Confirmatory Factor Analysis. After adequate fit of each measurement model was obtained, the complete structural model, with all hypothesized paths, was tested for goodness of fit. Cross-group equality constraints were imposed on all models to test for gender differences. The results provided evidence supporting the hypothesized structure of reasons for drinking and problem drinking. A single-factor model of reasons for limiting drinking was used in the analyses because a two-factor model was inadequate. Support was obtained for the structural model.
For example, the results revealed independent influences of Positively Reinforcing Reasons for Drinking, Negatively Reinforcing Reasons for Drinking, and Reasons for Limiting Drinking on consumption. In addition, Negatively Reinforcing Reasons helped to account for Drinking Problems independent of the amount of alcohol consumed. Although an additional path from Reasons for Limiting Drinking to Drinking Problems was hypothesized for women, it was of marginal significance and did not improve the model's fit. As a result, no sex differences in the model were found. This may be a result of the convergence of drinking patterns for men and women. Furthermore, it is suggested that gender differences may only be found in clinical samples of problem drinkers, where the relative level of consumption for women and men is similar.
Abstract:
The software used is Splus and R.
Abstract:
With advances in information technology, economic and financial time-series data are increasingly available. However, if standard time-series techniques are used, this wealth of information comes with a dimensionality problem. Since most series of interest are highly correlated, their dimension can be reduced using factor analysis. This technique has become increasingly popular in economics since the 1990s. Given the availability of data and computational advances, several new questions arise. What are the effects and the transmission of structural shocks in a data-rich environment? Can the information contained in a large set of economic indicators help to better identify monetary policy shocks, in view of the problems encountered in applications using standard models? Can financial shocks be identified and their effects on the real economy measured? Can the existing factor method be improved by incorporating another dimension-reduction technique such as VARMA analysis? Does this produce better forecasts of the major macroeconomic aggregates and help in impulse response analysis? Finally, can factor analysis be applied to random parameters? For example, are there only a small number of sources of the time instability of coefficients in empirical macroeconomic models? Using structural factor analysis and VARMA modeling, my thesis answers these questions in five articles. The first two chapters study the effects of monetary and financial shocks in a data-rich environment. The third article proposes a new method combining factor models and VARMA.
This approach is applied in the fourth article to measure the effects of credit shocks in Canada. The contribution of the last chapter is to impose a factor structure on time-varying parameters and to show that there is a small number of sources of this instability. The first article analyzes the transmission of monetary policy in Canada using the factor-augmented vector autoregression (FAVAR) model. Previous VAR-based studies found several empirical anomalies following a monetary policy shock. We estimate the FAVAR model using a large number of monthly and quarterly macroeconomic series. We find that the information contained in the factors is important for correctly identifying the transmission of monetary policy and helps to correct the standard empirical anomalies. Finally, the FAVAR framework yields impulse response functions for all indicators in the data set, producing the most comprehensive analysis to date of the effects of monetary policy in Canada. Motivated by the recent economic crisis, research on the role of the financial sector has regained importance. In the second article we examine the effects and propagation of credit shocks on the real economy using a large set of economic and financial indicators within a structural factor model. We find that a credit shock immediately raises credit spreads, lowers the value of Treasury bills, and causes a recession. These shocks have an important effect on measures of real activity, price indices, leading indicators, and financial indicators. Unlike other studies, our identification procedure for the structural shock does not require timing restrictions between financial and macroeconomic factors.
Moreover, it provides an interpretation of the factors without constraining their estimation. In the third article we study the relationship between the VARMA and factor representations of vector stochastic processes, and propose a new class of factor-augmented VARMA (FAVARMA) models. Our starting point is the observation that, in general, multivariate series and their associated factors cannot simultaneously follow a finite-order VAR process. We show that the dynamic process of the factors, extracted as linear combinations of the observed variables, is in general a VARMA and not a VAR, as is assumed elsewhere in the literature. Second, we show that even if the factors follow a finite-order VAR, this implies a VARMA representation for the observed series. We therefore propose the FAVARMA framework, which combines these two parameter-reduction methods. The model is applied in two forecasting exercises using US and Canadian data from Boivin, Giannoni and Stevanovic (2010, 2009), respectively. The results show that the VARMA component helps to better forecast the major macroeconomic aggregates relative to standard models. Finally, we estimate the effects of a monetary shock using the data and identification scheme of Bernanke, Boivin and Eliasz (2005). Our FAVARMA(2,1) model with six factors yields coherent and precise results on the effects and transmission of monetary policy in the United States. Unlike the FAVAR model used in the earlier study, where 510 VAR coefficients had to be estimated, we produce similar results with only 84 parameters of the factor dynamic process. The objective of the fourth article is to identify and measure the effects of credit shocks in Canada in a data-rich environment using a structural FAVARMA model.
Within the financial-accelerator framework developed by Bernanke, Gertler and Gilchrist (1999), we approximate the external finance premium by credit spreads. On the one hand, we find that an unanticipated increase in the US external finance premium generates a significant and persistent recession in Canada, accompanied by an immediate rise in Canadian credit spreads and interest rates. The common component appears to capture the important dimensions of the cyclical fluctuations of the Canadian economy. Variance decomposition analysis reveals that this credit shock has an important effect on different sectors of real activity, price indices, leading indicators, and credit spreads. On the other hand, an unexpected increase in the Canadian external finance premium has no significant effect in Canada. We show that the effects of credit shocks in Canada are essentially driven by global conditions, approximated here by the US market. Finally, given the identification procedure for the structural shocks, we find economically interpretable factors. The behavior of economic agents and of the economic environment can vary over time (e.g., changes in monetary policy strategies, shock volatility), inducing parameter instability in reduced-form models. Standard time-varying-parameter (TVP) models traditionally assume independent stochastic processes for all TVPs. In this article we show that the number of sources of time variability in the coefficients is likely very small, and we provide the first known empirical evidence of this in empirical macroeconomic models. The Factor-TVP approach, proposed in Stevanovic (2010), is applied within a standard VAR model with random coefficients (TVP-VAR).
We find that a single factor explains most of the variability of the VAR coefficients, while the shock-volatility parameters vary independently. The common factor is positively correlated with the unemployment rate. The same analysis is carried out with data including the recent financial crisis. The procedure now suggests two factors, and the behavior of the coefficients shows an important change since 2007. Finally, the method is applied to a TVP-FAVAR model. We find that only 5 dynamic factors govern the time instability of almost 700 coefficients.
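The factor-extraction step underlying FAVAR-type models can be sketched in a few lines: estimate factors as principal components of a large standardized panel, then fit an autoregression on the estimated factors. The sketch below uses synthetic data and a plain VAR(1) by OLS; it is an illustration of the generic two-step idea, not the estimation pipeline of the thesis.

```python
import numpy as np

# Step 1: extract principal-component factors from a large panel.
rng = np.random.default_rng(3)
T, N, k = 200, 50, 2

# Synthetic panel driven by k common factors plus idiosyncratic noise.
F_true = rng.normal(size=(T, k))
Lam = rng.normal(size=(N, k))
X = F_true @ Lam.T + rng.normal(scale=0.5, size=(T, N))

# Factors = first k principal components of the standardized panel.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
F_hat = U[:, :k] * S[:k] / np.sqrt(T)
share_common = (S[:k] ** 2).sum() / (S ** 2).sum()  # variance explained

# Step 2: VAR(1) on the estimated factors, F_t = A F_{t-1} + e_t (OLS).
Y, Xlag = F_hat[1:], F_hat[:-1]
A = np.linalg.lstsq(Xlag, Y, rcond=None)[0].T
```

Since the synthetic factors here are white noise, the estimated VAR matrix A is close to zero; with real macroeconomic data the factor dynamics (and, per the thesis, their VARMA rather than VAR form) carry the substance of the analysis.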