996 results for Latent state–trait theory


Relevance: 100.00%

Abstract:

Methods: Subjects were N = 580 patients with rheumatism, asthma, orthopedic conditions or inflammatory bowel disease, who completed the heiQ™ at the beginning of, at the end of, and 3 months after a disease-specific inpatient rehabilitation program in Germany. Structural equation modeling techniques were used to estimate latent trait-change models and to test for measurement invariance in each heiQ™ scale. Coefficients of consistency, occasion specificity and reliability were computed.
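
For reference, the three coefficients reported here come from the standard latent state–trait variance decomposition (generic textbook form; the study's exact parameterization may differ):

```latex
% Observed score Y_{it} (scale i, occasion t) decomposed into a latent
% trait, an occasion-specific state residual, and measurement error:
Y_{it} = \lambda_{it}\,\xi_i + \delta_{it}\,\zeta_t + \varepsilon_{it}

% Consistency: share of variance due to the stable trait
\mathrm{Con}(Y_{it}) = \frac{\lambda_{it}^{2}\,\mathrm{Var}(\xi_i)}{\mathrm{Var}(Y_{it})}

% Occasion specificity: share due to the situation and its
% interaction with the person
\mathrm{Spe}(Y_{it}) = \frac{\delta_{it}^{2}\,\mathrm{Var}(\zeta_t)}{\mathrm{Var}(Y_{it})}

% Reliability: all systematic (non-error) variance
\mathrm{Rel}(Y_{it}) = \mathrm{Con}(Y_{it}) + \mathrm{Spe}(Y_{it})
```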

Relevance: 40.00%

Abstract:

The aim of the thesis is to propose Bayesian estimation, via Markov chain Monte Carlo, of multidimensional item response theory models for graded responses with complex structures and correlated traits. In particular, this work focuses on the multiunidimensional and the additive underlying latent structures: the first is widely used and represents a classical approach in multidimensional item response analysis, while the second is able to reflect the complexity of real interactions between items and respondents. A simulation study is conducted to evaluate parameter recovery for the proposed models under different conditions (sample size, test and subtest length, number of response categories, and correlation structure). The results show that parameter recovery is particularly sensitive to sample size, owing to the model complexity and the large number of parameters to be estimated. For a sufficiently large sample size, the parameters of the multiunidimensional and additive graded response models are well reproduced. The results are also affected by the trade-off between the number of items constituting the test and the number of item categories. An application of the proposed models to response data collected to investigate Romagna and San Marino residents' perceptions of and attitudes towards the tourism industry is also presented.
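
As a point of reference, the category probabilities in Samejima's graded response model, the building block of both structures, can be sketched as follows; the logistic link, function names and parameter values are illustrative assumptions, not the thesis's code:

```python
# Category probabilities under Samejima's graded response model.
import numpy as np

def grm_category_probs(theta, a, b):
    """P(Y = k | theta) for k = 0..K, given K ordered thresholds.

    theta : latent trait score on the item's dimension
    a     : item discrimination
    b     : increasing array of K threshold parameters
    """
    # Cumulative probabilities P(Y >= k) for k = 1..K (logistic link).
    cum = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b))))
    upper = np.concatenate(([1.0], cum))   # prepend P(Y >= 0) = 1
    lower = np.concatenate((cum, [0.0]))   # append  P(Y >= K+1) = 0
    return upper - lower                   # adjacent differences

probs = grm_category_probs(theta=0.5, a=1.2, b=[-1.0, 0.0, 1.5])
print(probs, probs.sum())  # four ordered categories; probabilities sum to 1
```

Under the multiunidimensional structure each item loads on a single one of several correlated subtraits, so theta would be the respondent's score on that item's dimension; the additive structure instead lets several traits contribute jointly to each response.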

Relevance: 40.00%

Abstract:

The alternative classification system for personality disorders in DSM-5 features a hierarchical model of maladaptive personality traits. This trait model comprises five broad trait domains and 25 specific trait facets that can be reliably assessed using the Personality Inventory for DSM-5 (PID-5). Although there is a steadily growing literature on the validity of the PID-5, issues of temporal stability and situational influences on test scores are currently unexplored. We addressed these issues using a sample of 611 research participants who completed the PID-5 three times, with time intervals of two months. Latent state-trait (LST) analyses for each of the 25 PID-5 trait facets showed that, on average, 79.5% of the variance was due to stable traits (i.e., consistency), and 7.7% of the variance was due to situational factors (i.e., occasion specificity). Our findings suggest that the PID-5 trait facets predominantly capture individual differences that are stable across time.
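
Under the standard LST decomposition, reliability is the sum of the two components, so on average

```latex
\mathrm{Rel} \approx 0.795 + 0.077 = 0.872
```

i.e., roughly 87% of the observed variance in the PID-5 facet scores is systematic and about 13% is attributable to measurement error.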

Relevance: 30.00%

Abstract:

For the most part, the literature base for Integrated Marketing Communication (IMC) has developed at an applied or tactical level rather than an intellectual or theoretical one. Since industry, practitioner and even academic studies have provided little insight into what IMC is and how it operates, our approach has been to investigate the other IMC community: the academic or instructional group responsible for disseminating IMC knowledge. We proposed that the people providing course instruction and directing research activities have some basis for how they organize, consider and therefore teach IMC. A syllabus analysis of 87 IMC units across six countries examined each unit's content, its physical and conceptual delivery, and its intended audience. The study failed to discover any latent theoretical foundation that might serve as a base for understanding IMC. The students being prepared to extend, expand and enhance IMC concepts do not appear to be well served by the curriculum we found in our research. The study concludes with a model for further IMC curriculum development.

Relevance: 30.00%

Abstract:

Definition of disease phenotype is a necessary preliminary to research into the genetic causes of a complex disease. Clinical diagnosis of migraine is currently based on diagnostic criteria developed by the International Headache Society. Previously, we examined the natural clustering of these diagnostic symptoms using latent class analysis (LCA) and found that a four-class model was preferred. However, the classes can be ordered such that all symptoms progressively intensify, suggesting that a single continuous variable representing disease severity may provide a better model. Here, we compare two models, item response theory and LCA, each constructed within a Bayesian framework; the deviance information criterion (DIC) is used to assess model fit. We phenotyped our population sample using these models, estimated heritability and conducted genome-wide linkage analysis using Merlin-qtl. LCA with four classes was again preferred. After transformation, the phenotypic trait values derived from the two models are highly correlated (correlation = 0.99), and consequently the results of the subsequent genetic analyses were similar. Heritability was estimated at 0.37, while multipoint linkage analysis produced genome-wide significant linkage to chromosome 7q31-q33 and suggestive linkage to chromosomes 1 and 2. We argue that such continuous measures are a powerful tool for identifying genes contributing to migraine susceptibility.
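
For reference, the DIC used for this comparison is the standard Bayesian criterion (generic definition, not specific to this study):

```latex
\mathrm{DIC} = \bar{D} + p_D, \qquad p_D = \bar{D} - D(\bar{\theta})
```

where the first term is the posterior mean of the deviance, the second evaluates the deviance at the posterior mean of the parameters, and p_D penalizes effective model complexity; smaller DIC indicates better fit.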

Relevance: 30.00%

Abstract:

Two decades after its inception, Latent Semantic Analysis (LSA) has become part and parcel of every modern introduction to Information Retrieval. For any tool that matures so quickly, it is important to check its lore and limitations, or else stagnation will set in. We focus here on the three main aspects of LSA that are well accepted, the gist of which can be summarized as follows: (1) that LSA recovers latent semantic factors underlying the document space; (2) that this can be accomplished through lossy compression of the document space by eliminating lexical noise; and (3) that the latter is best achieved by Singular Value Decomposition. For each aspect we performed experiments analogous to those reported in the LSA literature and compared the evidence brought to bear in each case. On the negative side, we show that the above claims about LSA are much more limited than commonly believed. Even a simple example shows that LSA does not recover the optimal semantic factors as intended in the pedagogical example used in many LSA publications. Additionally, and remarkably deviating from LSA lore, LSA does not scale up well: the larger the document space, the less likely it is that LSA recovers an optimal set of semantic factors. On the positive side, we describe new algorithms to replace LSA (and more recent alternatives such as pLSA, LDA, and kernel methods) by trading its l2 space for an l1 space, thereby guaranteeing an optimal set of semantic factors. These algorithms seem to salvage the spirit of LSA as we think it was initially conceived.
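
The SVD-based compression at issue can be sketched in a few lines; the toy term-document matrix and the retained rank are illustrative assumptions:

```python
# LSA as rank-k truncated SVD of a term-document matrix.
import numpy as np

# Rows = terms, columns = documents (toy counts).
X = np.array([[2, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 2, 0, 1],
              [0, 0, 1, 2]], dtype=float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)

k = 2  # number of retained "semantic factors"
X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # best rank-k approximation in l2

# Documents embedded in the k-dimensional latent space:
doc_embeddings = np.diag(s[:k]) @ Vt[:k, :]
print(np.round(X_k, 2))
print(np.round(doc_embeddings, 2))
```

By the Eckart–Young theorem the truncation is optimal in the l2 (Frobenius) sense; the paper's critique is precisely that l2 optimality need not yield semantically optimal factors, which motivates the proposed l1 alternatives.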

Relevance: 30.00%

Abstract:

The current state of the practice in Blackspot Identification (BSI) uses safety performance functions based on total crash counts to identify transport system sites with potentially high crash risk. This paper postulates that variation in total crash counts over a transport network results from multiple distinct crash generating processes, including geometric characteristics of the road, spatial features of the surrounding environment, and driver behaviour factors. However, these multiple sources are ignored in current modelling methodologies, both in explaining and in predicting crash frequencies across sites. Instead, current practice employs models that imply a single underlying crash generating process. This mis-specification may lead to attributing crashes to the wrong contributing factors (e.g. concluding that a crash is predominantly caused by a geometric feature when it is a behavioural issue), which may ultimately lead to inefficient use of public funds and misidentification of true blackspots. This study proposes a latent class model consistent with a multiple-crash-process theory and investigates its influence on correctly identifying crash blackspots. We first present the theoretical and corresponding methodological approach, in which a Bayesian Latent Class (BLC) model is estimated under the assumption that crashes arise from two distinct risk generating processes: engineering factors and unobserved spatial factors. The Bayesian model is used to incorporate prior information about the contribution of each underlying process to the total crash count. The methodology is applied to the state-controlled roads in Queensland, Australia, and the results are compared to an Empirical Bayesian Negative Binomial (EB-NB) model. A comparison of goodness-of-fit measures shows significantly improved performance of the proposed model relative to the EB-NB model, and the detection of blackspots is likewise improved. In addition, modelling crashes as the result of two fundamentally separate underlying processes reveals more detailed information about unobserved crash causes.
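
The two-process idea can be illustrated with a stripped-down latent class count model; this sketch with Poisson kernels and fixed parameters is a deliberate simplification of the paper's Bayesian model, and all names and values are assumptions:

```python
# Two-class latent count model: each site's crash count comes from one
# of two processes (e.g. "engineering" vs "unobserved spatial" risk).
import numpy as np
from scipy.stats import poisson

def mixture_loglik(counts, pi, mu1, mu2):
    """Log-likelihood of a two-class Poisson mixture.

    pi  : prior probability of class 1
    mu1 : mean crash rate under process 1
    mu2 : mean crash rate under process 2
    """
    lik = pi * poisson.pmf(counts, mu1) + (1 - pi) * poisson.pmf(counts, mu2)
    return np.log(lik).sum()

def class_posteriors(counts, pi, mu1, mu2):
    """Posterior probability that each site belongs to class 1 (Bayes' rule);
    in a latent class approach this is what drives blackspot ranking."""
    p1 = pi * poisson.pmf(counts, mu1)
    p2 = (1 - pi) * poisson.pmf(counts, mu2)
    return p1 / (p1 + p2)

counts = np.array([0, 1, 2, 7, 12])
print(class_posteriors(counts, pi=0.8, mu1=1.0, mu2=8.0).round(3))
```

In the full BLC formulation the class proportion and the class-specific rates would carry priors (and covariates), with the posterior class memberships of sites feeding the blackspot ranking.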

Relevance: 30.00%

Abstract:

A meso material model for polycrystalline metals is proposed, in which the tiny slip systems distributed randomly between crystal slices within micro-grains or on grain boundaries are replaced by macro equivalent slip systems determined by the work-conjugate principle. The elastoplastic constitutive equation of this model is formulated for active hardening, latent hardening and the Bauschinger effect, in order to predict the macro elastoplastic stress-strain response of polycrystalline metals under complex loading conditions. The influence of the material property parameters on the size and shape of the subsequent yield surfaces is numerically investigated to demonstrate the fundamental features of the proposed material model. The derived constitutive equation proves accurate and efficient in numerical analysis. Compared with self-consistent theories that take crystal grains as their basic components, the present theory is much simpler in its mathematical treatment.
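
For orientation, the active/latent hardening distinction is conventionally expressed through a slip-system hardening matrix; the following is the standard crystal-plasticity form, not necessarily the authors' exact equations:

```latex
% Resolved shear stress on slip system alpha
% (slip direction s, slip-plane normal m):
\tau^{(\alpha)} = \boldsymbol{\sigma} : \bigl( \mathbf{s}^{(\alpha)} \otimes \mathbf{m}^{(\alpha)} \bigr)

% Evolution of the critical resolved shear stress:
\dot{\tau}_c^{(\alpha)} = \sum_{\beta} h_{\alpha\beta} \, \lvert \dot{\gamma}^{(\beta)} \rvert
```

Here the diagonal entries of the hardening matrix govern active (self) hardening and the off-diagonal entries govern latent hardening across systems.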

Relevance: 30.00%

Abstract:

In this paper we address issues relating to vulnerability to economic exclusion and levels of economic exclusion in Europe. We do so by applying latent class models to data from the European Community Household Panel for thirteen countries. This approach allows us to distinguish between vulnerability to economic exclusion and exposure to multiple deprivation at a particular point in time. The results of our analysis confirm that in every country it is possible to distinguish between a vulnerable and a non-vulnerable class. Association between income poverty, life-style deprivation and subjective economic strain is accounted for by allocating individuals to the categories of this latent variable. The size of the vulnerable class varies across countries in line with expectations derived from welfare regime theory. Between-class differentiation is weakest in social democratic regimes, but otherwise the pattern of differentiation is remarkably similar. The key discriminatory factor is life-style deprivation, followed by income and economic strain. Social class and employment status are powerful predictors of latent class membership in all countries, but the strength of these relationships varies across welfare regimes. Individual biography and life events are also related to vulnerability to economic exclusion. However, there is no evidence that they account for any significant part of the socio-economic structuring of vulnerability, and no support is found for the hypothesis that social exclusion has come to transcend class boundaries and become a matter of individual biography. Nevertheless, the extent of socio-economic structuring does vary substantially across welfare regimes. Levels of economic exclusion, in the sense of current exposure to multiple deprivation, also vary systematically by welfare regime and social class. Taking both vulnerability to economic exclusion and levels of exclusion into account suggests that care should be exercised in moving from evidence on the dynamic nature of poverty and economic exclusion to arguments relating to the superiority of selective over universal social policies.
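
The latent class machinery used here has the standard finite-mixture form for J dichotomous indicators and C classes (generic formulation, not the paper's exact specification):

```latex
P(\mathbf{y}_i) = \sum_{c=1}^{C} \pi_c \prod_{j=1}^{J} p_{cj}^{\,y_{ij}} \, (1 - p_{cj})^{\,1 - y_{ij}}
```

where the class sizes pi_c sum to one, the p_cj are within-class item-response probabilities, and indicators are assumed locally independent given class; with C = 2 this yields the vulnerable/non-vulnerable split described above.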

Relevance: 30.00%

Abstract:

Knowledge management theory has struggled with the concept of 'knowledge creation'. Since Nonaka's seminal 1991 article, an industry has grown up seeking to capture the knowledge in the heads and hearts of individuals so as to leverage it for organizational learning and growth. But the process of Socialization, Externalization, Combination and Internalization (SECI) outlined by Nonaka and his colleagues has essentially dealt with knowledge transfer rather than knowledge creation. This paper attempts to fill that gap in the process, from Nonaka's own addition of the need for "ba" to Snowden's suggestion that we consider "Cynefin" as a space for knowledge creation. Drawing upon a much older theoretical framework, the Johari Window developed in group dynamics, this paper suggests an alternative concept, latent knowledge, and introduces a different model for the process of knowledge creation.

Relevance: 30.00%

Abstract:

Latent variable models in finance originate both in asset pricing theory and in time series analysis. These two strands of literature appeal to two different concepts of latent structure, both of which are useful for reducing the dimension of a statistical model specified for a multivariate time series of asset prices. In the CAPM or APT beta pricing models, the dimension reduction is cross-sectional in nature, while in time-series state-space models, dimension is reduced longitudinally by assuming conditional independence between consecutive returns given a small number of state variables. In this paper, we use the concept of the Stochastic Discount Factor (SDF), or pricing kernel, as a unifying principle to integrate these two concepts of latent variables. Beta pricing relations amount to characterizing the factors as a basis of a vector space for the SDF. The coefficients of the SDF with respect to the factors are specified as deterministic functions of state variables that summarize their dynamics. In beta pricing models, it is often said that only factor risk is compensated, since the remaining idiosyncratic risk is diversifiable. Implicitly, this argument can be interpreted as a conditional cross-sectional factor structure, that is, conditional independence between the contemporaneous returns of a large number of assets given a small number of factors, as in standard Factor Analysis. We provide this unifying analysis in the context of conditional equilibrium beta pricing as well as asset pricing with stochastic volatility, stochastic interest rates and other state variables. We address the general issue of econometric specification of dynamic asset pricing models, covering the modern literature on conditionally heteroskedastic factor models as well as equilibrium-based asset pricing models with an intertemporal specification of preferences and market fundamentals. We interpret various instantaneous causality relationships between state variables and market fundamentals as leverage effects and discuss their central role in the validity of standard CAPM-like stock pricing and preference-free option pricing.
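
The unifying SDF idea can be summarized by the fundamental pricing equation and its beta representation (textbook form, stated here for reference):

```latex
E_t\bigl[ m_{t+1} R_{i,t+1} \bigr] = 1, \qquad
m_{t+1} = a_t + \mathbf{b}_t' \mathbf{f}_{t+1}
\;\Longrightarrow\;
E_t\bigl[ R_{i,t+1} \bigr] - r_{f,t} = \boldsymbol{\beta}_{i,t}' \boldsymbol{\lambda}_t
```

where the loadings are the conditional regression coefficients of returns on the factors spanning the SDF and the risk premia are determined by the SDF coefficients; a linear SDF is thus equivalent to a conditional beta pricing relation.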

Relevance: 30.00%

Abstract:

The attached file was created with Scientific WorkPlace (LaTeX).

Relevance: 30.00%

Abstract:

Evolutionary developmental genetics brings together systematists, morphologists and developmental geneticists; it will therefore impact on each of these component disciplines. The goals and methods of phylogenetic analysis are reviewed here, and the contribution of evolutionary developmental genetics to morphological systematics, in terms of character conceptualisation and primary homology assessment, is discussed. Evolutionary developmental genetics, like its component disciplines phylogenetic systematics and comparative morphology, is concerned with homology concepts. Phylogenetic concepts of homology and their limitations are considered here, and the need for independent homology statements at different levels of biological organisation is evaluated. The role of systematics in evolutionary developmental genetics is outlined. Phylogenetic systematics and comparative morphology will suggest effective sampling strategies to developmental geneticists. Phylogenetic systematics provides hypotheses of character evolution (including parallel evolution and convergence), stimulating investigations into the evolutionary gains and losses of morphologies. Comparative morphology identifies those structures that are not easily amenable to typological categorisation, and that may be of particular interest in terms of developmental genetics. The concepts of latent homology and genetic recall may also prove useful in the evolutionary interpretation of developmental genetic data.

Relevance: 30.00%

Abstract:

Simultaneous scintillometer measurements at multiple wavelengths (pairing visible or infrared with millimetre or radio waves) have the potential to provide estimates of path-averaged surface fluxes of sensible and latent heat. Traditionally, the equations to deduce fluxes from measurements of the refractive index structure parameter at the two wavelengths have been formulated in terms of absolute humidity. Here, it is shown that formulation in terms of specific humidity has several advantages. Specific humidity satisfies the requirement for a conserved variable in similarity theory and inherently accounts for density effects misapportioned through the use of absolute humidity. The validity and interpretation of both formulations are assessed and the analogy with open-path infrared gas analyser density corrections is discussed. Original derivations using absolute humidity to represent the influence of water vapour are shown to misrepresent the latent heat flux. The errors in the flux, which depend on the Bowen ratio (larger for drier conditions), may be of the order of 10%. The sensible heat flux is shown to remain unchanged. It is also verified that use of a single scintillometer at optical wavelengths is essentially unaffected by these new formulations. Where it may not be possible to reprocess two-wavelength results, a density correction to the latent heat flux is proposed for scintillometry, which can be applied retrospectively to reduce the error.
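
The two-wavelength method referred to here rests on a relation of the following generic form (coefficients schematic; the paper's contribution concerns whether the humidity variable is specific or absolute):

```latex
C_n^2(\lambda) = A_T^2(\lambda)\, C_T^2
  + 2\, A_T(\lambda)\, A_q(\lambda)\, C_{Tq}
  + A_q^2(\lambda)\, C_q^2
```

where C_T^2, C_q^2 and C_Tq are the temperature and humidity structure parameters and their cross term, and A_T, A_q are wavelength-dependent sensitivity coefficients. Measuring C_n^2 at an optical and a millimetre or radio wavelength gives two such equations, whose solution yields C_T^2 and C_q^2 and, via similarity theory, the sensible and latent heat fluxes.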