920 results for distributional equivalence


Relevance: 10.00%

Abstract:

This paper derives the HJB (Hamilton-Jacobi-Bellman) equation for sophisticated agents in a finite-horizon dynamic optimization problem with non-constant discounting in a continuous setting, by using a dynamic programming approach. A simple example is used in order to illustrate the applicability of this HJB equation, by suggesting a method for constructing the subgame-perfect equilibrium solution to the problem. Conditions for the observational equivalence with an associated problem with constant discounting are analyzed. Special attention is paid to the case of free terminal time. Strotz's model (a cake-eating problem for a nonrenewable resource with non-constant discounting) is revisited.
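
For reference, a schematic finite-horizon HJB equation with constant discounting, written in current-value form with generic notation (state dynamics g, instantaneous payoff F, salvage value S, discount rate \(\rho\)); this is a textbook baseline, not a formula taken from the paper:

\[
\rho\,V(x,t) - \frac{\partial V}{\partial t}(x,t) = \max_{u}\Big\{ F(x,u,t) + \nabla_x V(x,t)\cdot g(x,u,t) \Big\}, \qquad V(x,T) = S(x,T),
\]

for the problem \(\max_u \int_t^T e^{-\rho(s-t)} F(x(s),u(s),s)\,ds + e^{-\rho(T-t)} S(x(T))\) subject to \(\dot{x}(s) = g(x(s),u(s),s)\). With non-constant discounting, the term \(\rho V\) is replaced by a non-local expression that depends on the discount function and on the equilibrium path itself, which is what makes a dedicated HJB equation for sophisticated (time-consistent) agents necessary.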

Relevance: 10.00%

Abstract:

A review of nearly three decades of cross-cultural research shows that this domain still has to address several issues regarding the biases of data collection and sampling methods, the lack of clear and consensual definitions of constructs and variables, and measurement invariance issues that seriously limit the comparability of results across cultures. Indeed, a large majority of the existing studies are still based on the anthropological model, which compares two cultures and mainly uses convenience samples of university students. This paper stresses the need to incorporate a larger variety of regions and cultures in research designs, the necessity of theorizing and identifying a larger set of variables in order to describe a human environment, and the importance of overcoming methodological weaknesses to improve the comparability of measurement results. Cross-cultural psychology is at the next crossroads in its development, and researchers can certainly make major contributions to this domain if they can address these weaknesses and challenges.

Relevance: 10.00%

Abstract:

The aim of this study was to analyze the cross-cultural generalizability of the alternative Five-Factor Model (AFFM). The total sample was made up of 9,152 subjects from six countries: China, Germany, Italy, Spain, Switzerland, and the United States. The internal consistencies for all countries were generally similar to those found for the normative American sample. Factor analyses within cultures showed that the normative American structure was replicated in all cultures, although the congruence coefficients were slightly lower in China and Italy. A similar analysis at the facet level confirmed the high cross-cultural replicability of the AFFM. Mean-level comparisons did not always show the hypothesized effects. The mean score differences across countries were very small.
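
As an illustration of the factor-congruence checks mentioned above, here is a minimal numpy sketch (not the study's code; the array names are hypothetical) of Tucker's coefficient of congruence between corresponding factors of two loading matrices:

import numpy as np

def tucker_congruence(x, y):
    # Tucker's coefficient of congruence between two factor-loading vectors
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

def factor_congruences(loadings_sample, loadings_norm):
    # compare each factor (column) of a sample solution with the normative one
    return [tucker_congruence(loadings_sample[:, k], loadings_norm[:, k])
            for k in range(loadings_sample.shape[1])]

# e.g. factor_congruences(china_loadings, us_normative_loadings)  # hypothetical loading matrices
# coefficients near .95 or above are conventionally read as factor replication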

Relevance: 10.00%

Abstract:

We study spacetime diffeomorphisms in the Hamiltonian and Lagrangian formalisms of generally covariant systems. We show that the gauge group for such a system is characterized by having generators which are projectable under the Legendre map. The gauge group is found to be much larger than the original group of spacetime diffeomorphisms, since its generators must depend on the lapse function and shift vector of the spacetime metric in a given coordinate patch. Our results are generalizations of earlier results by Salisbury and Sundermeyer. They arise in a natural way from using the requirement of equivalence between Lagrangian and Hamiltonian formulations of the system, and they are new in that the symmetries are realized on the full set of phase space variables. The generators are displayed explicitly and are applied to the relativistic string and to general relativity.
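
A schematic version of the lapse-and-shift dependence in question, using the standard ADM normal/tangential split rather than any formula quoted from the paper: with lapse N and shift N^a, the descriptors \(\epsilon^\mu\) of an infinitesimal spacetime diffeomorphism can be traded for a normal component \(\xi^\perp\) and tangential components \(\xi^a\) via

\[
\epsilon^{0} = \frac{\xi^{\perp}}{N}, \qquad \epsilon^{a} = \xi^{a} - \frac{N^{a}}{N}\,\xi^{\perp} .
\]

Generators parametrized by \(\xi^\perp\) and \(\xi^a\) that do not themselves involve the lapse and shift are the natural candidates for projectability under the Legendre map, which is the sense in which the gauge generators acquire an explicit dependence on N and N^a.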

Relevance: 10.00%

Abstract:

In this paper we examine in detail the implementation, with its associated difficulties, of the Killing conditions and gauge fixing into the variational principle formulation of Bianchi-type cosmologies. We address problems raised in the literature concerning the Lagrangian and the Hamiltonian formulations: We prove their equivalence, make clear the role of the homogeneity preserving diffeomorphisms in the phase space approach, and show that the number of physical degrees of freedom is the same in the Hamiltonian and Lagrangian formulations. Residual gauge transformations play an important role in our approach, and we suggest that Poincaré transformations for special relativistic systems can be understood as residual gauge transformations. In the Appendixes, we give the general computation of the equations of motion and the Lagrangian for any Bianchi-type vacuum metric and for spatially homogeneous Maxwell fields in a nondynamical background (with zero currents). We also illustrate our counting of degrees of freedom in an appendix.
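
The degrees-of-freedom comparison alluded to above can be made concrete with the standard Dirac counting rule (a generic bookkeeping identity, not a formula quoted from the paper): for a phase space of dimension 2n with F independent first-class and S independent second-class constraints,

\[
\#\,\text{physical degrees of freedom} \;=\; \tfrac{1}{2}\,(2n - 2F - S),
\]

since each first-class constraint removes two phase-space directions (the constraint itself plus a gauge direction) and each second-class constraint removes one. Agreement of this count with the Lagrangian count is the kind of consistency the paper establishes for Bianchi-type models.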

Relevance: 10.00%

Abstract:

The equivalence between the covariant and the noncovariant versions of a constrained system is shown to hold after quantization in the framework of the field-antifield formalism. Our study covers the cases of electromagnetism and Yang-Mills fields and sheds light on some aspects of the Faddeev-Popov method, for both the covariant and noncovariant approaches, which have not been fully clarified in the literature.
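
For orientation, the field-antifield (Batalin-Vilkovisky) framework referred to here is organized around a few standard equations (quoted from the general formalism, not from this particular paper): fields \(\phi^A\) are paired with antifields \(\phi^*_A\), the antibracket is

\[
(X,Y) \;=\; \frac{\partial_r X}{\partial \phi^A}\,\frac{\partial_l Y}{\partial \phi^*_A} \;-\; \frac{\partial_r X}{\partial \phi^*_A}\,\frac{\partial_l Y}{\partial \phi^A},
\]

the extended action satisfies the classical master equation \((S,S)=0\) (or, at the quantum level, \(\tfrac{1}{2}(S,S) = i\hbar\,\Delta S\)), and a gauge is fixed by choosing a gauge fermion \(\Psi\) and setting \(\phi^*_A = \partial \Psi/\partial \phi^A\). Covariant and noncovariant gauges then correspond to different choices of \(\Psi\), which is the setting in which their equivalence after quantization can be posed.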

Relevance: 10.00%

Abstract:

Executive Summary: The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields of economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we apply an idea from the field of fuzzy set theory to the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results in asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds.

Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative-agent model addressing some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation undertaken to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one.

Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics than those of portfolio strategies optimal with respect to a single performance measure. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance. We first used the Kolmogorov-Smirnov test to determine whether two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the proposed aggregation of performance measures leads to realized portfolio returns that first-order stochastically dominate the ones resulting from optimization with respect to, for example, the Treynor ratio or Jensen's alpha alone. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e. the sequence of expected shortfalls over a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures lay above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio return distribution that second-order stochastically dominates the ones produced by virtually all individual performance measures considered.

Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index, relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member-country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
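
A minimal numpy/scipy sketch of the dominance checks described for Chapter 2 (illustrative only; the function and variable names are ours, not the thesis code): a two-sample Kolmogorov-Smirnov test, an empirical first-order stochastic dominance check via CDF comparison, and a second-order check via pointwise comparison of absolute (generalized) Lorenz curves.

import numpy as np
from scipy import stats

def first_order_dominates(a, b, grid_size=200):
    # a FSD b (empirically) if the CDF of a lies at or below the CDF of b everywhere
    a, b = np.sort(np.asarray(a, float)), np.sort(np.asarray(b, float))
    grid = np.linspace(min(a[0], b[0]), max(a[-1], b[-1]), grid_size)
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return bool(np.all(cdf_a <= cdf_b + 1e-12))

def absolute_lorenz(x, quantiles):
    # absolute (generalized) Lorenz curve: integral of the quantile function up to each level q
    xs = np.sort(np.asarray(x, float))
    return np.array([xs[: max(1, int(np.ceil(q * xs.size)))].mean() * q for q in quantiles])

def second_order_dominates(a, b, num_q=99):
    # a SSD b (empirically) if its generalized Lorenz curve lies weakly above b's at every quantile
    qs = np.linspace(0.01, 0.99, num_q)
    return bool(np.all(absolute_lorenz(a, qs) >= absolute_lorenz(b, qs) - 1e-12))

# usage with hypothetical return series:
# stat, p = stats.ks_2samp(returns_aggregated, returns_treynor)  # are the two distributions different?
# first_order_dominates(returns_aggregated, returns_treynor)
# second_order_dominates(returns_aggregated, returns_treynor)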

Relevance: 10.00%

Abstract:

A Lagrangian treatment of the quantization of first-class Hamiltonian systems with constraints and Hamiltonians linear and quadratic in the momenta, respectively, is performed. The "first reduce and then quantize" and the "first quantize and then reduce" (Dirac's) methods are compared. A source of ambiguities in the latter approach is pointed out, and its relevance to issues concerning self-consistency and equivalence with the first-reduce method is emphasized. One of the main results is the relation between the propagator obtained à la Dirac and the propagator in the full space. As an application of the formalism developed, quantization on coset spaces of compact Lie groups is presented. In this case it is shown that a natural selection of a Dirac quantization allows for full self-consistency and equivalence. Finally, the specific case of the propagator on a two-dimensional sphere S² viewed as the coset space SU(2)/U(1) is worked out. © 1995 American Institute of Physics.
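
Schematically, and in generic notation rather than the paper's, the two routes compare as follows: in the "first reduce and then quantize" approach one solves the first-class constraints \(\phi_a \approx 0\) and fixes the gauge classically, then quantizes the reduced phase space; in the Dirac ("first quantize and then reduce") approach one quantizes the full phase space and imposes the constraints as conditions on physical states,

\[
\hat{\phi}_a\,\lvert \psi_{\rm phys} \rangle = 0 ,
\]

where operator-ordering choices in defining \(\hat{\phi}_a\) are the typical kind of ambiguity such comparisons must confront, and the comparison can then be phrased at the level of propagators, as in the paper.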

Relevance: 10.00%

Abstract:

(1) The common shrew Sorex araneus and Millet's shrew S. coronatus are sibling species. They are morphologically and genetically very similar but do not hybridize. Their parapatric distribution throughout south-western Europe, with a few narrow zones of distributional overlap, suggests that they are in competitive parapatry. (2) Two of these contact zones were studied; there was evidence of coexistence over periods of 2 years as well as habitat segregation. In both zones, the species segregated on litter thickness and humidity variables. (3) A simple analysis of spatial distribution showed that habitats visible in the field corresponded to the habitats selected by the species. Habitat selection was found throughout the annual life-cycle of the shrews. (4) In one contact zone, a removal experiment was performed to test whether habitat segregation is induced by interspecific interactions. The experiment showed that the species select habitats differentially when both are present and abandon habitat selection when their competitor is removed. (5) These results confirm the role of resource partitioning in promoting narrow ranges of distributional overlap between such parapatric species and qualitatively support the prediction of habitat selection theory that, in a two-species system, coexistence may be achieved by differential habitat selection to avoid competition. The results also support the view that the common shrew and Millet's shrew are in competitive parapatry.

Relevance: 10.00%

Abstract:

AIM: Phylogenetic diversity patterns are increasingly being used to better understand the role of ecological and evolutionary processes in community assembly. Here, we quantify how these patterns are influenced by scale choices in terms of spatial and environmental extent and organismic scales. LOCATION: European Alps. METHODS: We applied 42 sampling strategies differing in their combination of focal scales. For each resulting sub-dataset, we estimated the phylogenetic diversity of the species pools, phylogenetic α-diversities of local communities, and statistics commonly used together with null models in order to infer non-random diversity patterns (i.e. phylogenetic clustering versus over-dispersion). Finally, we studied the effects of scale choices on these measures using regression analyses. RESULTS: Scale choices were decisive for revealing signals in diversity patterns. Notably, changes in focal scales sometimes reversed a pattern of over-dispersion into clustering. Organismic scale had a stronger effect than spatial and environmental extent. However, we did not find general rules for the direction of change from over-dispersion to clustering with changing scales. Importantly, these scale issues had only a weak influence when focusing on regional diversity patterns that change along abiotic gradients. MAIN CONCLUSIONS: Our results call for caution when combining phylogenetic data with distributional data to study how and why communities differ from random expectations of phylogenetic relatedness. These analyses seem to be robust when the focus is on relating community diversity patterns to variation in habitat conditions, such as abiotic gradients. However, if the focus is on identifying relevant assembly rules for local communities, the uncertainty arising from a certain scale choice can be immense. In the latter case, it becomes necessary to test whether emerging patterns are robust to alternative scale choices.
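
A compact illustration of the null-model machinery referred to above: a generic standardized effect size for mean pairwise phylogenetic distance (in the spirit of ses.mpd-type analyses); it is not the study's code, and the names are ours.

import numpy as np

def mpd(dist, present):
    # mean pairwise phylogenetic distance among the species marked present (boolean vector)
    idx = np.flatnonzero(present)
    if idx.size < 2:
        return np.nan
    sub = dist[np.ix_(idx, idx)]
    return sub[np.triu_indices(idx.size, k=1)].mean()

def ses_mpd(dist, present, n_null=999, seed=0):
    # standardized effect size of MPD against a 'random draw from the species pool' null model;
    # negative values suggest phylogenetic clustering, positive values over-dispersion
    rng = np.random.default_rng(seed)
    present = np.asarray(present, bool)
    obs = mpd(dist, present)
    k, n = int(present.sum()), dist.shape[0]
    null = np.empty(n_null)
    for i in range(n_null):
        draw = np.zeros(n, dtype=bool)
        draw[rng.choice(n, size=k, replace=False)] = True
        null[i] = mpd(dist, draw)
    return (obs - null.mean()) / null.std(ddof=1)

How the species pool, the spatial and environmental extent, and the organismic scale are fixed before such a computation is exactly the kind of choice whose influence the study quantifies.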

Relevance: 10.00%

Abstract:

This dissertation is concerned with the development of algorithmic methods for the unsupervised learning of natural language morphology, using a symbolically transcribed wordlist. It focuses on the case of languages approaching the introflectional type, such as Arabic or Hebrew. The morphology of such languages is traditionally described in terms of discontinuous units: consonantal roots and vocalic patterns. Inferring this kind of structure is a challenging task for current unsupervised learning systems, which generally operate with continuous units. In this study, the problem of learning root-and-pattern morphology is divided into a phonological and a morphological subproblem.
The phonological component of the analysis seeks to partition the symbols of a corpus (phonemes, letters) into two subsets that correspond well with the phonetic definition of consonants and vowels; building around this result, the morphological component attempts to establish the list of roots and patterns in the corpus, and to infer the rules that govern their combinations. We assess the extent to which this can be done on the basis of two hypotheses: (i) the distinction between consonants and vowels can be learned by observing their tendency to alternate in speech; (ii) roots and patterns can be identified as sequences of the previously discovered consonants and vowels respectively. The proposed algorithm uses a purely distributional method for partitioning symbols. Then it applies analogical principles to identify a preliminary set of reliable roots and patterns, and gradually enlarge it. This extension process is guided by an evaluation procedure based on the minimum description length principle, in line with the approach to morphological learning embodied in LINGUISTICA (Goldsmith, 2001). The algorithm is implemented as a computer program named ARABICA; it is evaluated with regard to its ability to account for the system of plural formation in a corpus of Arabic nouns. This thesis shows that complex linguistic structures can be discovered without recourse to a rich set of a priori hypotheses about the phenomena under consideration. It illustrates the possible synergy between learning mechanisms operating at distinct levels of linguistic description, and attempts to determine where and why such a cooperation fails. It concludes that the tension between the universality of the consonant-vowel distinction and the specificity of root-and-pattern structure is crucial for understanding the advantages and weaknesses of this approach.
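
The two-step logic can be caricatured in a few lines of Python (a toy sketch built on the two hypotheses above, in the spirit of Sukhotin's classic consonant/vowel algorithm; it is not the ARABICA implementation):

def guess_vowels(words):
    # Sukhotin-style heuristic: symbols that alternate most with the others are taken as vowels
    symbols = sorted({ch for w in words for ch in w})
    pos = {s: i for i, s in enumerate(symbols)}
    n = len(symbols)
    adj = [[0] * n for _ in range(n)]
    for w in words:
        for a, b in zip(w, w[1:]):
            if a != b:
                adj[pos[a]][pos[b]] += 1
                adj[pos[b]][pos[a]] += 1
    sums = [sum(row) for row in adj]
    vowels = set()
    while True:
        candidates = [k for k in range(n) if symbols[k] not in vowels]
        if not candidates:
            break
        i = max(candidates, key=lambda k: sums[k])
        if sums[i] <= 0:
            break
        vowels.add(symbols[i])
        for j in candidates:
            if j != i:
                sums[j] -= 2 * adj[j][i]
    return vowels

def root_and_pattern(word, vowels):
    # hypothesis (ii): the root is the consonant skeleton, the pattern a template with C slots
    root = "".join(ch for ch in word if ch not in vowels)
    pattern = "".join(ch if ch in vowels else "C" for ch in word)
    return root, pattern

# root_and_pattern("kitaab", {"a", "i", "u"})  ->  ("ktb", "CiCaaC")

The real task is harder than this sketch suggests: candidate roots and patterns must be validated analogically and the inventory grown under a minimum-description-length criterion, which is where the evaluation procedure described above comes in.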

Relevance: 10.00%

Abstract:

The set of optimal matchings in the assignment matrix allows us to define a reflexive and symmetric binary relation on each side of the market, the equal-partner binary relation. The number of equivalence classes of the transitive closure of the equal-partner binary relation determines the dimension of the core of the assignment game. This result provides an easy procedure to determine the dimension of the core directly from the entries of the assignment matrix and shows that the dimension of the core is determined not so much by the number of optimal matchings as by their relative position in the assignment matrix.
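
Since the transitive closure of a reflexive, symmetric relation is an equivalence relation whose classes are the connected components of the corresponding graph, the quantity described above reduces to a component count once the equal-partner pairs have been read off the assignment matrix. A small illustrative sketch (the extraction of the pairs themselves follows the paper's definition and is not reproduced here):

def num_equivalence_classes(agents, equal_partner_pairs):
    # number of equivalence classes of the transitive closure of a reflexive,
    # symmetric relation = number of connected components (union-find)
    parent = {a: a for a in agents}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in equal_partner_pairs:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    return len({find(a) for a in agents})

# e.g. num_equivalence_classes(["b1", "b2", "b3"], [("b1", "b2")]) == 2

Per the abstract, it is this count that determines the dimension of the core of the assignment game.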