895 results for Formal theory of the conflict of interests
Abstract:
The general theory of nonlinear relaxation times is developed for the case of Gaussian colored noise. General expressions are obtained and applied to the study of the characteristic decay time of unstable states in different situations, including white and colored noise, with emphasis on the distributed initial conditions. Universal effects of the coupling between colored noise and random initial conditions are predicted.
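For orientation, the nonlinear relaxation time studied in this literature is commonly defined as the normalized time integral of the mean decay toward the stationary state; the abstract does not quote the paper's formula, so the following standard form should be read as an assumption about its formalism:

```latex
T = \int_0^{\infty}
    \frac{\langle x(t)\rangle - \langle x\rangle_{\mathrm{st}}}
         {\langle x(0)\rangle - \langle x\rangle_{\mathrm{st}}}
    \, dt
```

Here \(\langle x\rangle_{\mathrm{st}}\) is the stationary mean; averaging over distributed initial conditions enters through \(\langle x(0)\rangle\).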
Abstract:
A theory is presented to explain the statistical properties of the growth of dye-laser radiation. Results are in agreement with recent experimental findings. The different roles of pump-noise intensity and correlation time are elucidated.
Abstract:
In this contribution we show that a suitably defined nonequilibrium entropy of an N-body isolated system is not, in general, a constant of the motion, and that its variation is bounded, with the bounds determined by the thermodynamic entropy, i.e., the equilibrium entropy. We define the nonequilibrium entropy as a convex functional of the set of n-particle reduced distribution functions (n ≤ N), generalizing the Gibbs fine-grained entropy formula. Additionally, as a consequence of our microscopic analysis, we find that this nonequilibrium entropy behaves as a free entropic oscillator. In the approach to the equilibrium regime, we find relaxation equations of the Fokker-Planck type, in particular for the one-particle distribution function.
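For reference, the Gibbs fine-grained entropy that the proposed functional generalizes has the standard form below; the paper's actual generalization to reduced n-particle distribution functions is not reproduced in the abstract:

```latex
S_G = -k_B \int \rho_N \ln \rho_N \, d\Gamma
```

where \(\rho_N\) is the full N-particle phase-space distribution and \(d\Gamma\) the phase-space volume element.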
Abstract:
In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators select a threshold p value below which they reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine a critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that are often misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the observed difference, or one more extreme, given that the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
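The definition of the p value above can be made concrete with a small simulation. The sketch below (not from the paper; a generic permutation test on invented data) computes the fraction of random relabelings of two samples that produce a mean difference at least as extreme as the observed one, i.e., the p value under the null hypothesis of no group difference:

```python
import random

def permutation_p_value(sample_a, sample_b, n_permutations=10_000, seed=0):
    """Two-sided permutation test for a difference in means.

    Returns the fraction of label shufflings whose absolute mean
    difference is at least as extreme as the observed one, which
    estimates the p value under the null of no group difference.
    """
    rng = random.Random(seed)
    observed = abs(sum(sample_a) / len(sample_a) - sum(sample_b) / len(sample_b))
    pooled = list(sample_a) + list(sample_b)
    n_a = len(sample_a)
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            count += 1
    return count / n_permutations
```

With two clearly separated samples the p value is small; with identical samples the observed difference is zero and every shuffle is "at least as extreme", so the p value is 1.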
Abstract:
A new model for decision making under risk, which considers subjective and objective information in the same formulation, is presented here, together with the uncertain probabilistic weighted average (UPWA). Its main advantage is that it unifies the probability and the weighted average in the same formulation while accounting for the degree of importance of each case in the analysis. Moreover, it can deal with uncertain environments represented in the form of interval numbers. We study some of its main properties and particular cases. The applicability of the UPWA is also studied; it is very broad, because all previous studies that use the probability or the weighted average can be revisited with this new approach. Focus is placed on a multi-person decision-making problem regarding the selection of strategies using the theory of expertons.
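The unification described above is often realized as a convex combination of a probabilistic average and a weighted average, applied endpoint-wise to interval arguments. The exact UPWA formulation is in the paper; the sketch below assumes that common convex-combination form, with `beta` playing the role of the degree of importance of the probabilistic information:

```python
def upwa(intervals, probabilities, weights, beta=0.5):
    """Sketch of an uncertain probabilistic weighted average.

    intervals     : list of (lo, hi) interval numbers
    probabilities : probabilistic weights p_i (sum to 1)
    weights       : weighted-average weights w_i (sum to 1)
    beta          : degree of importance given to the probabilities

    Each argument's effective weight is beta*p_i + (1-beta)*w_i;
    the aggregation is applied to both interval endpoints.
    """
    combined = [beta * p + (1 - beta) * w
                for p, w in zip(probabilities, weights)]
    lo = sum(c * a for c, (a, _) in zip(combined, intervals))
    hi = sum(c * b for c, (_, b) in zip(combined, intervals))
    return (lo, hi)
```

When `beta = 1` this reduces to a probabilistic average of intervals; when `beta = 0`, to a plain weighted average.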
Abstract:
The theory of language has occupied a special place in the history of Indian thought. Indian philosophers give particular attention to the analysis of the cognition obtained from language, known under the generic name of śābdabodha. This term denotes, among other things, the hearer's cognition episode, the content of which is described in the form of a paraphrase of a sentence represented as a hierarchical structure. Philosophers subject the meaning of the component items of a sentence, and their relationships, to thorough examination, and represent the content of the resulting cognition as a paraphrase centred on one meaning element, taken as the principal qualificand (mukhyaviśeṣya), which is qualified by the other meaning elements. This analysis has been the object of continuous debate, over a period of more than a thousand years, between philosophers of the schools of Mīmāṃsā, Nyāya (mainly in its Navya form) and Vyākaraṇa. While these philosophers fully agree that the cognition of sentence meaning has a hierarchical structure, and share the concept of a single principal qualificand (qualified by other meaning elements), they strongly disagree on the question of which meaning element has this role and by which morphological item it is expressed. This disagreement is the central point of their debate and gives rise to competing versions of the theory. The Mīmāṃsakas argue that the principal qualificand is what they call bhāvanā ('bringing into being', 'efficient force' or 'productive operation'), expressed by the verbal affix and distinct from the specific procedures signified by the verbal root; the Naiyāyikas generally take it to be the meaning of the word with the first case ending, while the Vaiyākaraṇas take it to be the operation expressed by the verbal root.
All the participants rely on the Pāṇinian grammar: the Mīmāṃsakas and Naiyāyikas do not compose a new grammar of Sanskrit, but use different interpretive strategies to justify views that are often in overt contradiction with the interpretation of the Pāṇinian rules accepted by the Vaiyākaraṇas. In each of the three positions, weakness in one area is compensated by strength in another, and the cumulative force of the total argumentation shows that no position can be declared correct or overall superior to the others. This book is an attempt to understand this debate, and to show that, to make full sense of the irreconcilable positions of the three schools, one must go beyond linguistic factors and consider the very beginnings of each school's concern with the issue under scrutiny. The texts, and particularly the late texts of each school, present very complex versions of the theory, yet the key to understanding why these positions remain irreconcilable seems to lie elsewhere, in spite of extensive argumentation involving a great deal of linguistic and logical technicality. Historically, this theory arises first in Mīmāṃsā (with Śabara and Kumārila), then in Nyāya (with Udayana), in a doctrinal and theological context, as a byproduct of the debate over Vedic authority. The Navya-Vaiyākaraṇas enter this debate last (with Bhaṭṭoji Dīkṣita and Kauṇḍa Bhaṭṭa), with the declared aim of refuting the arguments of the Mīmāṃsakas and Naiyāyikas by bringing to light the shortcomings in their understanding of Pāṇinian grammar. The central argument concerns the capacity of these initial contexts, with the network of issues to which the principal qualificand theory is connected, to render intelligible the presuppositions and aims behind the complex linguistic justification of the classical and late stages of this debate.
Reading the debate in this light not only reveals the rationality and internal coherence of each position beyond the linguistic arguments, but makes it possible to understand why the thinkers of the three schools have continued to hold on to three mutually exclusive positions. They are defending not only their version of the principal qualificand theory, but (though not openly acknowledged) the entire network of arguments, linguistic and/or extra-linguistic, to which this theory is connected, as well as the presuppositions and aims underlying these arguments.
Abstract:
The study aimed to determine which factors influence the use of a nutrition-related recommendation label as an aid in consumers' purchase decisions. The theory of planned behaviour, which has proven to explain many nutrition- and health-related behaviours well, was chosen as the basis for the study's framework. The study was carried out as a survey whose data were collected with a questionnaire published on the Internet. The results showed that the consumer's intention to use the recommendation label was the strongest explanatory factor in the model. In addition, label use was explained by the consumer's perceived internal control, which is strongly influenced by external control. Purchase-decision involvement was found to affect the relationship between intention and actual label use. Overall, the results showed that consumers intend to use the label, but actual use is considerably lower.
Abstract:
In this paper we present a model of representative behavior in the dictator game. Individuals have simultaneous and non-contradictory preferences over monetary payoffs, altruistic actions and equity concerns. We require that these preferences be aggregated and grounded in principles of representativeness and empathy. The model's results closely match the observed mean split and replicate other empirical regularities (for instance, higher stakes reduce the willingness to give). In addition, we connect representative behavior with an allocation rule built on psychological and behavioral arguments, an approach consistently neglected in this literature.
Key words: Dictator Game, Behavioral Allocation Rules, Altruism, Equity Concerns, Empathy, Self-interest
JEL classification: C91, D03, D63, D74.
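One of the regularities mentioned above (higher stakes reduce the share given) can be illustrated with a toy "warm-glow" utility, in which the dictator values the transfer logarithmically. This is an illustrative specification chosen by the editor, not the authors' model, and the parameter `alpha` is invented:

```python
import math

def optimal_give(endowment, alpha=6.0, steps=1000):
    """Grid-search the transfer g maximizing u = (endowment - g) + alpha*ln(1 + g).

    Because the marginal warm glow alpha/(1+g) falls below the marginal
    value of keeping money at a fixed absolute transfer (g ~ alpha - 1),
    the optimal *share* given shrinks as the endowment grows.
    """
    best_g, best_u = 0.0, float("-inf")
    for i in range(steps + 1):
        g = endowment * i / steps
        u = (endowment - g) + alpha * math.log(1 + g)
        if u > best_u:
            best_g, best_u = g, u
    return best_g
```

With these parameters the optimal transfer is roughly constant in absolute terms, so the fraction given declines as stakes rise, qualitatively matching the cited regularity.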
Abstract:
The effect of the heat flux on the rate of chemical reaction in dilute gases is shown to be important for reactions characterized by high activation energies and in the presence of very large temperature gradients. This effect, obtained from the second-order terms in the distribution function (similar to those obtained in the Burnett approximation to the solution of the Boltzmann equation), is derived on the basis of information theory. It is shown that the analytical results describing the effect are simpler if the kinetic definition of the nonequilibrium temperature is introduced rather than the thermodynamic definition. The numerical results are nearly the same for both definitions.
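The two temperature definitions contrasted above are standard in this literature, though the abstract does not spell them out; as an assumption about the paper's notation, they typically read:

```latex
\frac{3}{2}\, n k_B T_{\mathrm{kin}}
  = \int \frac{1}{2} m C^2\, f(\mathbf{r},\mathbf{c},t)\, d\mathbf{c},
\qquad
T_{\mathrm{th}}^{-1} = \left(\frac{\partial s}{\partial u}\right)
```

i.e., the kinetic temperature is defined from the mean peculiar kinetic energy of the distribution function \(f\), while the thermodynamic temperature follows from the derivative of the nonequilibrium entropy \(s\) with respect to the internal energy \(u\).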
Abstract:
In his version of the theory of multicomponent systems, Friedman used the analogy between the virial expansion for the osmotic pressure obtained from the McMillan-Mayer (MM) theory of solutions in the grand canonical ensemble and the virial expansion for the pressure of a real gas. For the calculation of the thermodynamic properties of the solution, Friedman proposed a definition of the "excess free energy" that is reminiscent of the old idea of "osmotic work". However, the precise meaning to be attached to his free energy is not well defined, among other reasons because in osmotic equilibrium the solution is not a closed system, and for a given process the total amount of solvent in the solution varies. In this paper, an analysis based on thermodynamics is presented in order to obtain an exact and precise definition of Friedman's excess free energy and of its use in comparisons with experimental data.
Abstract:
Formation of nanosized droplets/bubbles from a metastable bulk phase is connected to many unresolved scientific questions. We analyze the properties and stability of multicomponent droplets and bubbles in the canonical ensemble, and compare with single-component systems. The bubbles/droplets are described on the mesoscopic level by square gradient theory. Furthermore, we compare the results to a capillary model which gives a macroscopic description. Remarkably, the solutions of the square gradient model, representing bubbles and droplets, are accurately reproduced by the capillary model except in the vicinity of the spinodals. The solutions of the square gradient model form closed loops, which shows the inherent symmetry and connected nature of bubbles and droplets. A thermodynamic stability analysis is carried out, where the second variation of the square gradient description is compared to the eigenvalues of the Hessian matrix in the capillary description. The analysis shows that it is impossible to stabilize arbitrarily small bubbles or droplets in closed systems and gives insight into metastable regions close to the minimum bubble/droplet radii. Despite the large difference in complexity, the square gradient and the capillary model predict the same finite threshold sizes and very similar stability limits for bubbles and droplets, both for single-component and two-component systems.
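The macroscopic capillary description referred to above rests on the Young-Laplace relation, in which mechanical equilibrium across a curved interface fixes a critical radius. The sketch below uses that standard relation with illustrative water-like values (the numbers are not from the paper):

```python
def critical_radius(surface_tension, pressure_difference):
    """Young-Laplace equilibrium radius for a spherical interface:
    r* = 2 * sigma / delta_p, with sigma in N/m and delta_p in Pa.

    In a closed (canonical) system this radius marks the threshold
    below which a bubble or droplet cannot be stabilized.
    """
    return 2.0 * surface_tension / pressure_difference

# Illustrative values (SI units): sigma = 0.072 N/m, delta_p = 1e5 Pa
r_star = critical_radius(0.072, 1.0e5)  # about 1.44 micrometers
```

The capillary model treats the interface as sharp; the square gradient model resolves its finite width, which is why the two agree except near the spinodals.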