945 results for Variational explanation


Relevance: 60.00%

Abstract:

This master's thesis proposes an epistemological analysis of the creative power of natural selection. The objective is to determine to what extent it is legitimate to attribute such a power to it. To do so, we ask whether the selectionist explanation can answer the question of the origin of the structural forms of living things. In the first chapter, we trace the reasoning that led Darwin to grant natural selection a creative power. We then see that an exclusively Darwinian framework may not be able to answer the problem of evolutionary novelty. In the second chapter, we see from a Darwinian perspective that it is possible to preserve the essence of Darwinian theory and to grant natural selection a creative power, although two fundamental Darwinian pillars must be called into question. In the third chapter, we see from a post-Darwinian perspective that the cumulative power of natural selection may not be able to explain adaptation at the individual level, which seriously calls into question the creative power of natural selection. We then see that the debate between proponents of a positive view and proponents of a negative view of natural selection may depend on a particular metaphysical presupposition.

Relevance: 20.00%

Abstract:

We examine the use of randomness extraction and expansion in key agreement (KA) protocols to generate uniformly random keys in the standard model. Although existing works provide the basic theorems necessary, they lack details or examples of appropriate cryptographic primitives and/or parameter sizes. This has led to the large amount of min-entropy needed in the (non-uniform) shared secret being overlooked in proposals and efficiency comparisons of KA protocols. We therefore summarize existing work in the area and examine the security levels achieved with the use of various extractors and expanders for particular parameter sizes. The tables presented herein show that the shared secret needs a min-entropy of at least 292 bits (and even more under more realistic assumptions) to achieve an overall security level of 80 bits using the extractors and expanders we consider. The tables may be used to find the min-entropy required for various security levels and assumptions. We also find that when using the short exponent theorems of Gennaro et al., the short exponents may need to be much longer than they suggested.
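As a rough illustration of where a figure like 292 bits can come from, the sketch below applies the standard leftover-hash-lemma accounting: extracting m bits that are ε-close to uniform with a universal hash requires min-entropy of roughly m + 2·log2(1/ε). The parameter choices here are illustrative assumptions, not the abstract's exact table entries.

```python
from math import log2

def min_entropy(probs):
    # H_inf(X) = -log2(max_x Pr[X = x]) for a discrete distribution.
    return -log2(max(probs))

def lhl_required_min_entropy(m, eps):
    # Leftover hash lemma: a universal hash family can extract m bits
    # that are eps-close to uniform provided the source has
    # min-entropy k >= m + 2*log2(1/eps).
    return m + 2 * log2(1 / eps)

# Illustrative parameters (an assumption, not the paper's derivation):
# a 128-bit key at statistical distance 2^-82 from uniform.
k = lhl_required_min_entropy(128, 2 ** -82)
print(k)  # 292.0
```

The quadratic entropy loss (2·log2(1/ε) rather than log2(1/ε)) is exactly why the min-entropy requirement grows so much faster than the target key length.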

Relevance: 20.00%

Abstract:

The well-known Easterlin paradox points out that average happiness has remained constant over time despite sharp rises in GNP per head. At the same time, a micro literature has typically found positive correlations between individual income and individual measures of subjective well-being. This paper suggests that these two findings are consistent with the presence of relative income terms in the utility function. Income may be evaluated relative to others (social comparison) or to oneself in the past (habituation). We review the evidence on relative income from the subjective well-being literature. We also discuss the relation (or not) between happiness and utility, and discuss some nonhappiness research (behavioral, experimental, neurological) related to income comparisons. We last consider how relative income in the utility function can affect economic models of behavior in the domains of consumption, investment, economic growth, savings, taxation, labor supply, wages, and migration.
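One common way to formalise relative income terms is a log-linear utility mixing absolute income, social comparison, and habituation; the specification and weights below are an illustrative assumption, not the paper's model.

```python
from math import log

def utility(y, y_ref_social, y_ref_past, lam=0.3, mu=0.2):
    # Log-linear utility with comparison terms (illustrative form):
    # absolute income, income relative to others (social comparison),
    # and income relative to one's own past (habituation).
    return ((1 - lam - mu) * log(y)
            + lam * log(y / y_ref_social)
            + mu * log(y / y_ref_past))

# If everyone's income doubles (so both reference levels double too),
# only the absolute term rises -- the comparison terms are unchanged:
base = utility(100, 100, 100)
doubled = utility(200, 200, 200)
print(doubled - base)  # (1 - 0.3 - 0.2) * log(2) ≈ 0.347
```

This is the Easterlin reconciliation in miniature: economy-wide growth moves the reference levels in step with income, so measured happiness barely moves, while an individual raise (references fixed) raises all three terms.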

Relevance: 20.00%

Abstract:

Recent claims of equivalence of animal and human reasoning are evaluated and a study of avian cognition serves as an exemplar of weaknesses in these arguments. It is argued that current research into neurobiological cognition lacks theoretical breadth to substantiate comparative analyses of cognitive function. Evaluation of a greater range of theoretical explanations is needed to verify claims of equivalence in animal and human cognition. We conclude by exemplifying how the notion of affordances in multi-scale dynamics can capture behavior attributed to processes of analogical and inferential reasoning in animals and humans.

Relevance: 20.00%

Abstract:

One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any incorrect label. We show that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins of the training examples. Finally, we compare our explanation to those based on the bias-variance decomposition.
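The margin described in the abstract is straightforward to compute; the sketch below normalises by the total number of votes (a common convention so margins lie in [-1, 1]). The function and labels are illustrative, not the paper's code.

```python
def margin(votes, correct_label):
    # Margin of a voting classifier on one example: votes for the
    # correct label minus the maximum votes for any incorrect label,
    # normalised by the total number of votes.
    total = sum(votes.values())
    correct = votes.get(correct_label, 0)
    best_wrong = max((v for lbl, v in votes.items() if lbl != correct_label),
                     default=0)
    return (correct - best_wrong) / total

# 7 of 10 base classifiers vote "spam", 3 vote "ham"; true label "spam":
print(margin({"spam": 7, "ham": 3}, "spam"))  # 0.4
```

A positive margin means a correct (and confident) classification; boosting's continued test-error improvement after zero training error corresponds to these margins being pushed further above zero.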

Relevance: 20.00%

Abstract:

Alliances, along with other inter-organisational forms, have become a strategy of choice and necessity for both the private and public sectors. From initial formation, alliances develop and change in different ways, with research suggesting that many alliances will be terminated without their potential value being realised. Alliance process theorists address this phenomenon, seeking explanations as to why alliances unfold the way they do. However, these explanations have generally focussed on economic and structural determinants: empirically, little is known about how and why the agency of alliance actors shapes the alliance path. Theorists have suggested that current alliance process theory has provided valuable but partial accounts of alliance development, which could be usefully extended by considering social and individual factors. The purpose of this research was therefore to extend alliance process theory by exploring individual agency as an explanation of alliance events and, in doing so, to reveal the potential of a multi-frame approach for understanding alliance process. Through an historical study of a single, rich case of alliance process, this thesis provided three explanations for the sequence of alliance events, each informed by a different theoretical perspective. The explanatory contribution of the Individual Agency (IA) perspective was distilled through juxtaposition with the perspectives of Environmental Determinism (ED) and Indeterminacy/Chance (I/C). The research produced a number of findings. First, it provided empirical support for the tentative proposition that the choices and practices of alliance actors are partially explanatory of alliance change and that these practices are particular to the alliance context. Secondly, the study found that examining the case through three theoretical frames provided a more complete explanation. Two propositions were put forward as to how individual agency can be theorised within this three-perspective framework.
Finally, the case explained which alliance actors were required to shape alliance decision making, and why.

Relevance: 20.00%

Abstract:

This thesis investigates profiling and differentiating customers through the use of statistical data mining techniques. The business application of our work centres on examining individuals' seldom-studied yet critical consumption behaviour over an extensive time period within the context of the wireless telecommunication industry; consumption behaviour (as opposed to purchasing behaviour) is behaviour that has been performed so frequently that it becomes habitual and involves minimal intention or decision making. Key variables investigated are the activity initialisation timestamp and cell tower location, as well as the activity type and usage quantity (e.g., a voice call with duration in seconds); the research focuses on customers' spatial and temporal usage behaviour. The main methodological emphasis is on the development of clustering models based on Gaussian mixture models (GMMs), which are fitted using the recently developed variational Bayesian (VB) method. VB is an efficient deterministic alternative to the popular but computationally demanding Markov chain Monte Carlo (MCMC) methods. The standard VB-GMM algorithm is extended by allowing component splitting, such that it is robust to initial parameter choices and can automatically and efficiently determine the number of components. The new algorithm we propose allows more effective modelling of individuals' highly heterogeneous and spiky spatial usage behaviour, or more generally human mobility patterns; the term spiky describes data patterns with large areas of low probability mixed with small areas of high probability. Customers are then characterised and segmented based on the fitted GMM, which corresponds to how each of them uses the products/services spatially in their daily lives; this essentially captures their likely lifestyle and occupational traits.
Other significant research contributions include fitting GMMs using VB to circular data (i.e., the temporal usage behaviour) and developing clustering algorithms suitable for high-dimensional data based on the use of VB-GMM.
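The circular treatment of temporal data mentioned above can be illustrated with a minimal sketch (the helper names are ours, not the thesis's): time of day is mapped onto the unit circle, so that activity just before and just after midnight is recognised as similar, which a linear hour scale would miss.

```python
from math import sin, cos, atan2, pi

def hour_to_angle(hour):
    # Map time of day onto the unit circle so that 23:00 and 01:00
    # are close neighbours rather than 22 hours apart.
    return 2 * pi * (hour % 24) / 24

def circular_mean_hour(hours):
    # Mean direction of the angles, mapped back to hours.
    s = sum(sin(hour_to_angle(h)) for h in hours)
    c = sum(cos(hour_to_angle(h)) for h in hours)
    return (atan2(s, c) * 24 / (2 * pi)) % 24

# Calls at 23:00 and 01:00: the linear mean of [23, 1] is 12 (noon,
# clearly wrong); the circular mean is midnight.
print(circular_mean_hour([23, 1]))  # ≈ 0 (midnight)
```

Fitting mixture components to such angles (rather than raw hours) is what lets a clustering model recover, say, a "late-night caller" segment as a single mode.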

Relevance: 20.00%

Abstract:

Recent research on novice programmers has suggested that they pass through neo-Piagetian stages: the sensorimotor, preoperational, and concrete operational stages, before eventually reaching programming competence at the formal operational stage. This paper presents empirical results in support of this neo-Piagetian perspective. The major novel contributions of this paper are empirical results for some exam questions aimed at testing novices for the concrete operational abilities to reason with quantities that are conserved, processes that are reversible, and properties that hold under transitive inference. While the questions we used had been proposed earlier by Lister, he did not present any data on how students performed on them. Our empirical results demonstrate that many students struggle to answer these problems despite their apparent simplicity. We then compare student performance on these questions with their performance on six "explain in plain English" questions.
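A hypothetical item in the spirit of the reversibility questions described (our own illustration, not one of Lister's actual exam questions) might ask a novice to reason backwards through a short sequence of assignments, a hallmark of concrete operational thinking.

```python
# Illustrative reversibility exercise: a student at the concrete
# operational stage should see that undoing the steps of encode, in
# the opposite order, recovers the original value.
def encode(x):
    x = x + 3      # step 1
    x = x * 2      # step 2
    return x

def decode(y):
    y = y // 2     # undo step 2 (every encoded value is even)
    y = y - 3      # undo step 1
    return y

print(decode(encode(10)))  # 10
```

The preoperational novice, by contrast, tends to trace such code line by line with concrete values rather than reasoning about the inverse transformation as a whole.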

Relevance: 20.00%

Abstract:

Discrete Markov random field models provide a natural framework for representing images or spatial datasets. They model the spatial association present while providing a convenient Markovian dependency structure and strong edge-preservation properties. However, parameter estimation for discrete Markov random field models is difficult due to the complex form of the associated normalizing constant for the likelihood function. For large lattices, the reduced dependence approximation to the normalizing constant is based on the concept of performing computationally efficient and feasible forward recursions on smaller sublattices which are then suitably combined to estimate the constant for the whole lattice. We present an efficient computational extension of the forward recursion approach for the autologistic model to lattices that have an irregularly shaped boundary and which may contain regions with no data; these lattices are typical in applications. Consequently, we also extend the reduced dependence approximation to these scenarios enabling us to implement a practical and efficient non-simulation based approach for spatial data analysis within the variational Bayesian framework. The methodology is illustrated through application to simulated data and example images. The supplemental materials include our C++ source code for computing the approximate normalizing constant and simulation studies.
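The forward-recursion idea can be seen in miniature on a small regular lattice; the reduced dependence approximation applies the same recursion to sublattices and combines the results. The sketch below (our own illustration, not the paper's C++ code) computes the exact normalising constant of a binary autologistic model by recursing over rows, since only adjacent rows interact.

```python
from itertools import product
from math import exp, log

def autologistic_log_z(nrows, ncols, alpha, beta):
    # Exact log normalising constant of a binary (0/1) autologistic
    # model on an nrows x ncols lattice, via forward recursion over
    # rows: each row has 2**ncols states, feasible for small ncols.
    rows = list(product((0, 1), repeat=ncols))

    def within(r):   # field term + horizontal-neighbour term of one row
        return alpha * sum(r) + beta * sum(r[j] * r[j + 1]
                                           for j in range(ncols - 1))

    def between(r, s):  # vertical-neighbour term between stacked rows
        return beta * sum(a * b for a, b in zip(r, s))

    # forward[r] = summed weight of all partial lattices ending in row r
    forward = {r: exp(within(r)) for r in rows}
    for _ in range(nrows - 1):
        forward = {s: exp(within(s)) * sum(w * exp(between(r, s))
                                           for r, w in forward.items())
                   for s in rows}
    return log(sum(forward.values()))

# Sanity check: with alpha = beta = 0 every configuration has weight 1,
# so Z = 2**(nrows*ncols), i.e. log Z = 9*log(2) on a 3x3 lattice.
print(autologistic_log_z(3, 3, 0.0, 0.0))  # ≈ 6.238
```

The cost is governed by the 2**ncols row states, which is why the paper's approach works on sublattices and then combines them rather than recursing over the full lattice width.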

Relevance: 20.00%

Abstract:

This study investigates the gap between the climate change-related corporate governance information being disclosed by companies and the information sought by stakeholders. To accomplish this objective we drew on previous research on stakeholder demand for information and conducted in-depth interviews with six corporate representatives from major Australian emission-intensive companies. Having gained and documented a rich insight into the potential factors responsible for the current gap in disclosure, we find that the existence of an expectations gap, the perceived cost of providing commercially sensitive information, the limited accountability accepted by corporate managers, and a lack of stakeholder pressure together contribute to the lack of disclosure. In highlighting this gap, the study suggests strategies to reduce the shortfall in climate change-related corporate governance disclosures.