987 results for complexity theory
Abstract:
We present a resonating-valence-bond theory of superconductivity for the Hubbard-Heisenberg model on an anisotropic triangular lattice. Our calculations are consistent with the observed phase diagram of the half-filled layered organic superconductors, such as the β, β′, κ, and λ phases of (BEDT-TTF)₂X [bis(ethylenedithio)tetrathiafulvalene] and (BETS)₂X [bis(ethylenedithio)tetraselenafulvalene]. We find a first-order transition from a Mott insulator to a d_{x²−y²} superconductor with a small superfluid stiffness, and a pseudogap with d_{x²−y²} symmetry.
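As an illustrative aside (not part of the abstract itself), the d_{x²−y²} pairing symmetry referred to above is conventionally written as a gap function that changes sign under a 90° rotation in the Brillouin zone:

```latex
\Delta(\mathbf{k}) = \Delta_0 \left( \cos k_x a - \cos k_y a \right)
```

The sign change and the nodes along |k_x| = |k_y| are what give both the superconducting gap and the pseudogap their d_{x²−y²} character.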
Abstract:
A survey study of twenty-two Australian CEOs and their subordinates assessed relationships between Australian leader motives, Australian value-based leader behaviour, subordinate tall poppy attitudes, and subordinate commitment, effectiveness, motivation and satisfaction (CEMS). On the whole, the results showed general support for value-based leadership processes. Subsequent regression analyses of the second main component of Value Based Leadership Theory, value-based leader behaviour, revealed that the collectivistic, inspirational, integrity and visionary behaviour sub-scales of the construct were positively related to subordinate CEMS. Although the hypothesis that subordinate tall poppy attitudes would moderate value-based leadership processes was not clearly supported, subsequent regression analyses found that subordinate tall poppy attitudes were negatively related to perceptions of value-based leader behaviour and CEMS. These findings suggest complex relationships between the three constructs, and the proposed model for the Australian context is amended accordingly. Overall, the research supports the need to consider culture-specific attitudes in management development.
Abstract:
The theory of Owicki and Gries has been used as a platform for safety-based verification and derivation of concurrent programs. It has also been integrated with the progress logic of UNITY, which has allowed newer techniques of progress-based verification and derivation to be developed. However, a theoretical basis for the integrated theory has thus far been missing. In this paper, we provide a theoretical background for the logic of Owicki and Gries integrated with the logic of progress from UNITY. An operational semantics for the new framework is provided and used to prove soundness of the progress logic.
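The interference-freedom condition at the heart of the Owicki-Gries theory can be illustrated with a minimal sketch (the function and schedules below are illustrative, not from the paper): thread A's local proof {x == 0} x := x + 1 {x == 1} is invalidated when thread B's increment runs between A's read and write.

```python
def interleave(schedule):
    """Run the atomic steps of two non-atomic 'x := x + 1' threads
    in the given order; each increment is a 'read' then a 'write'."""
    x = 0
    local = {}                 # per-thread register holding the value read
    for tid, step in schedule:
        if step == "read":
            local[tid] = x     # load shared x into the thread's register
        else:                  # "write"
            x = local[tid] + 1 # store the incremented register back

    return x

# Sequential schedule: both increments take effect.
assert interleave([("A", "read"), ("A", "write"),
                   ("B", "read"), ("B", "write")]) == 2

# Interfering schedule: B's write lands between A's read and A's write,
# so one increment is lost. A's local postcondition still "holds" locally,
# which is exactly the interference an Owicki-Gries proof must rule out.
assert interleave([("A", "read"), ("B", "read"),
                   ("B", "write"), ("A", "write")]) == 1
```

Interference-freedom obliges each assertion in one thread's proof outline to be preserved by every atomic statement of the other thread, which fails here.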
Abstract:
There are many factors which affect the L2 learner's performance at the levels of phonology, morphology and syntax. Consequently, when L2 learners attempt to communicate in the target language, their language production will show systematic variability across the above-mentioned linguistic domains. This variation can be attributed to factors such as interlocutors, topic familiarity, prior knowledge, task condition, planning time and task type. This paper reports the results of ongoing research investigating variability attributed to task type. It is hypothesized that the particular type of task learners are required to perform will result in variation in their performance. Statistical analyses of the performance of twenty L2 learners at the English department of Tabriz University provided evidence in support of the hypothesis that the performance of L2 learners shows systematic variability attributable to task type.
Abstract:
Data mining is the process of identifying valid, implicit, previously unknown, potentially useful and understandable information from large databases. It is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, input data can be structured, semi-structured, or unstructured, and can take text, categorical or numerical values. One of the important characteristics of data mining is its ability to deal with data that are large in volume, distributed, time-variant, noisy, and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association rule mining can be useful for market basket problems, clustering algorithms can be used to discover trends in unsupervised learning problems, classification algorithms can be applied in decision-making problems, and sequential and time series mining algorithms can be used in predicting events, fault detection, and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly for data mining applications in engineering fields. Together with regression, classification is mainly used for predictive modelling. So far, a number of classification algorithms have been put into practice. According to Sebastiani (2002), the main classification algorithms can be categorized as: decision tree and rule-based approaches such as C4.5 (Quinlan, 1996); probabilistic methods such as the Bayesian classifier (Lewis, 1998); on-line methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton & Williams, 1986); and example-based methods such as k-nearest neighbours (Duda & Hart, 1973) and SVM (Cortes & Vapnik, 1995). Other important techniques for classification tasks include Associative Classification (Liu et al., 1998) and Ensemble Classification (Tumer, 1996).
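As one concrete instance of the example-based classifiers mentioned above, a k-nearest-neighbours classifier can be sketched in a few lines of Python (a minimal illustration, not tied to any cited implementation):

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    `train` is a list of (feature_vector, label) pairs; distance is Euclidean.
    """
    # Sort training points by distance to the query, nearest first.
    dists = sorted((math.dist(x, query), label) for x, label in train)
    # Majority vote among the k nearest labels.
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy two-class problem: points near (0, 0) are "a", points near (5, 5) are "b".
train = [((0, 0), "a"), ((1, 0), "a"), ((0, 1), "a"),
         ((5, 5), "b"), ((6, 5), "b"), ((5, 6), "b")]
assert knn_classify(train, (0.5, 0.5)) == "a"
assert knn_classify(train, (5.5, 5.5)) == "b"
```

The same lazy, instance-based structure underlies the k-nearest-neighbours method of Duda & Hart cited above; production systems replace the linear scan with spatial indexing.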
Abstract:
Polytomous Item Response Theory Models provides a unified, comprehensive introduction to the range of polytomous models available within item response theory (IRT). It begins by outlining the primary structural distinction between the two major types of polytomous IRT models. This focuses on the two types of response probability that are unique to polytomous models and their associated response functions, which are modeled differently by the different types of IRT model. It describes, both conceptually and mathematically, the major specific polytomous models, including the Nominal Response Model, the Partial Credit Model, the Rating Scale Model, and the Graded Response Model. Important variations, such as the Generalized Partial Credit Model, are also described, as are less common variations, such as the Rating Scale version of the Graded Response Model. Relationships among the models are also investigated, and the operation of measurement information is described for each major model. Practical examples of major models using real data are provided, as is a chapter on choosing an appropriate model. Figures are used throughout to illustrate important elements as they are described.
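To make the "two types of response probability" concrete, the Graded Response Model mentioned above builds its category probabilities from cumulative boundary probabilities. A minimal sketch (parameter values are illustrative, not from the book):

```python
import math

def grm_category_probs(theta, a, b):
    """Graded Response Model category probabilities for one item.

    theta: latent trait value; a: discrimination; b: ordered category
    thresholds b_1 < ... < b_m. Returns probabilities for categories 0..m.
    """
    # Cumulative ("boundary") response functions P(X >= k | theta),
    # padded with P(X >= 0) = 1 and P(X >= m + 1) = 0.
    star = [1.0] + [1.0 / (1.0 + math.exp(-a * (theta - bk))) for bk in b] + [0.0]
    # Each category response function is a difference of adjacent boundaries.
    return [star[k] - star[k + 1] for k in range(len(b) + 1)]

probs = grm_category_probs(theta=0.0, a=1.5, b=[-1.0, 0.0, 1.0])
assert abs(sum(probs) - 1.0) < 1e-12   # the categories partition probability 1
assert len(probs) == 4                  # m thresholds give m + 1 categories
```

The cumulative probabilities here are the "boundary" type of response probability; models like the Partial Credit Model are instead built from adjacent-category probabilities, which is the structural distinction the book's opening chapters develop.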
Abstract:
The Systems Theory Framework was developed to produce a metatheoretical framework through which the contribution of all theories to our understanding of career behaviour could be recognised. In addition, it emphasises the individual as the site for the integration of theory and practice. Its utility has become more broadly acknowledged through its application to a range of cultural groups and settings, qualitative assessment processes, career counselling, and multicultural career counselling. For these reasons, the STF is a very valuable addition to the field of career theory. In viewing the field of career theory as a system, open to changes and developments from within itself and through constantly interrelating with other systems, the STF and this book are adding to the pattern of knowledge and relationships within the career field. The contents of this book will be integrated within the field as representative of a shift in understanding existing relationships within and between theories. In the same way, each reader will integrate the contents of the book within their existing views about the current state of career theory and within their current theory-practice relationship. This book should be required reading for anyone involved in career theory. It is also highly suitable as a text for an advanced career counselling or theory course.
Abstract:
OctVCE is a Cartesian-cell CFD code produced especially for numerical simulations of shock and blast wave interactions with complex geometries, in particular from explosions. Virtual Cell Embedding (VCE) was chosen as its Cartesian-cell kernel for its simplicity and sufficiency for practical engineering design problems. The code uses a finite-volume formulation of the unsteady Euler equations with a second-order explicit Runge-Kutta Godunov (MUSCL) scheme. Gradients are calculated using a least-squares method with a minmod limiter. The flux solvers used are AUSM, AUSMDV and EFM. No fluid-structure coupling or chemical reactions are allowed, but the gas model can be a perfect gas, with JWL or JWLB models for the explosive products. This report also describes the code's 'octree' mesh-adaptation capability and the point-inclusion query procedures of the VCE geometry engine. Finally, some space is devoted to describing code parallelization using the shared-memory OpenMP paradigm. The user manual for the code is to be found in the companion report 2007/13.
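The minmod limiter mentioned above is simple enough to sketch generically (this is a standard textbook form, not OctVCE's actual source): it keeps the smaller-magnitude of two candidate slopes when they agree in sign, and sets the slope to zero otherwise, suppressing oscillations at extrema.

```python
def minmod(a, b):
    """Minmod slope limiter: smaller-magnitude slope if a and b agree
    in sign, zero otherwise (flattening the reconstruction at extrema)."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slope(u_left, u_mid, u_right):
    """Limited slope for the middle cell of a 1-D MUSCL reconstruction,
    built from the backward and forward neighbour differences."""
    return minmod(u_mid - u_left, u_right - u_mid)

assert minmod(1.0, 2.0) == 1.0    # same sign: smaller magnitude wins
assert minmod(-1.0, 2.0) == 0.0   # opposite signs (local extremum): zero slope
assert limited_slope(0.0, 1.0, 4.0) == 1.0
```

In a MUSCL scheme such as the one described, the limited slope is used to extrapolate cell-centred values to face states before the flux solver (AUSM, AUSMDV or EFM here) is applied.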
Abstract:
Potential errors in the application of mixture theory to the analysis of multiple-frequency bioelectrical impedance data for the determination of body fluid volumes are assessed. Potential sources of error include conductive length, tissue fluid resistivity, body density, weight, and technical errors of measurement. Inclusion of inaccurate estimates of body density and weight introduces errors of typically < ±3%, but incorrect assumptions regarding conductive length or fluid resistivities may each incur errors of up to 20%.
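The sensitivity to conductive length can be illustrated with a first-order propagation sketch. The simple uniform-conductor proportionality V ∝ ρL²/R is used here purely for illustration (the paper's mixture-theory equations differ in detail), but it shows why length errors dominate: volume scales with length squared.

```python
def volume(rho, length, resistance):
    """Uniform-conductor model V = rho * L**2 / R, used only to illustrate
    how input errors propagate into a fluid-volume estimate."""
    return rho * length ** 2 / resistance

# Illustrative (not measured) inputs: resistivity in ohm*cm scaled arbitrarily,
# conductive length in m, resistance in ohm.
v_true = volume(rho=40.0, length=1.70, resistance=500.0)
v_len_err = volume(rho=40.0, length=1.70 * 1.10, resistance=500.0)  # +10% length

# A 10% error in conductive length inflates the estimate by ~21%
# (V scales with L squared) -- the order of magnitude the abstract reports
# for incorrect length assumptions.
assert abs(v_len_err / v_true - 1.10 ** 2) < 1e-12
```

By the same first-order argument, a resistivity error passes through proportionally, while density and weight enter only through weaker correction terms, consistent with the < ±3% figure above.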