983 results for mesure de von Neumann réelle


Relevance:

100.00%

Publisher:

Abstract:

The strategic equilibrium of an N-person cooperative game with transferable utility is a system composed of a cover collection of subsets of N and a set of extended imputations attainable through such an equilibrium cover. The system describes a state of coalitional bargaining stability in which every player has a bargaining alternative against any other player to support his corresponding equilibrium claim. Any coalition in the stable system may form and divide the characteristic value of the coalition as prescribed by the equilibrium payoffs. If syndicates are allowed to form, a formed coalition may become a syndicate, using the equilibrium payoffs as disagreement values when bargaining for a part of the complementary coalition's incremental value to the grand coalition when it forms. The emergent, well-known constant-sum derived game in partition function form is described in terms of parameters that result from incumbent binding agreements. The strategic equilibrium corresponding to the derived game gives an equal value claim to all players. This surprising result is explained, alternatively, in terms of strategic-equilibrium-based possible outcomes of a sequence of bargaining stages: when the binding agreements are in the right sequential order, von Neumann and Morgenstern (vN-M) non-discriminatory solutions emerge. In these solutions a branch preferred by a sufficient number of players is identified: the weaker players syndicate against the stronger player. This condition is referred to as the stronger player paradox. A strategic alternative available to the stronger player to overcome the anticipated undesirable results is to voluntarily lower his bargaining equilibrium claim. In doing so, the original strategic equilibrium is modified and vN-M discriminatory solutions may occur, but a different stronger player may also emerge who will eventually have to lower his equilibrium claim. A sequence of such measures converges to the equal-opportunity-for-all vN-M solution anticipated by the strategic equilibrium of the partition function derived game.
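
As an illustration of the equal-value claim in a constant-sum game, a minimal sketch in Python with a made-up three-player characteristic function (the numbers and the equal split are illustrative assumptions, not the paper's construction):

```python
# Illustrative sketch: in a constant-sum game, the derived-game
# equilibrium described above assigns every player an equal claim
# on the grand coalition's value.
from itertools import combinations

players = ("1", "2", "3")
# Hypothetical characteristic function of the classic three-person
# constant-sum game: v(S) + v(complement of S) == v(N) for every S.
v = {(): 0, ("1",): 0, ("2",): 0, ("3",): 0,
     ("1", "2"): 3, ("1", "3"): 3, ("2", "3"): 3,
     ("1", "2", "3"): 3}

# Check the constant-sum property.
for r in range(len(players) + 1):
    for S in combinations(players, r):
        comp = tuple(p for p in players if p not in S)
        assert v[S] + v[comp] == v[players]

# Equal value claim to all players, as in the derived game's equilibrium.
equal_claim = v[players] / len(players)
print({p: equal_claim for p in players})  # {'1': 1.0, '2': 1.0, '3': 1.0}
```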

Relevance:

100.00%

Publisher:

Abstract:

The development of linear economic models was one of the most significant achievements of economic theory in post-war North America. Linear programming, developed by George B. Dantzig (1947), the input-output models of Wassily Leontief (1946), and the game theory of John von Neumann (1944) became three distinct branches of linear economic theory. Their applications in a variety of fields of knowledge, such as economics and political science, and in management activities in industry and government, are increasingly significant. The main objective of this work is to present a practical model of the production processes typical of a factory or firm that transforms inputs into products. The model is developed in the context, and with the concepts, of the theory of linear economic models and of the operations research approach, also known as management science.
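
A minimal sketch of such an input-to-output production model as a linear program; the products, input stocks, and unit profits below are made-up coefficients, not from the paper:

```python
# Minimal linear-programming sketch of a production model: choose output
# quantities x1, x2 to maximize profit subject to limited input stocks.
from scipy.optimize import linprog

profit = [-3.0, -5.0]            # linprog minimizes, so negate unit profits
# Each row: units of one input consumed per unit of each product.
inputs_per_unit = [[1.0, 2.0],   # input A
                   [3.0, 1.0]]   # input B
stock = [14.0, 18.0]             # available stock of inputs A and B

res = linprog(c=profit, A_ub=inputs_per_unit, b_ub=stock,
              bounds=[(0, None)] * 2)
print(res.x, -res.fun)           # optimal production plan and total profit
```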

Relevance:

100.00%

Publisher:

Abstract:

From the beginning, the world of game-playing by machine has been fortunate in attracting contributions from the leading names of computer science. Charles Babbage, Konrad Zuse, Claude Shannon, Alan Turing, John von Neumann, John McCarthy, Allen Newell, Herb Simon and Ken Thompson all come to mind, and each reader will wish to add to this list. Recently, the Journal has saluted both Claude Shannon and Herb Simon. Ken's retirement from Lucent Technologies' Bell Labs to the start-up Entrisphere is also a good moment for reflection.

Relevance:

100.00%

Publisher:

Abstract:

With the prospect of exascale computing, computational methods requiring only local data become especially attractive. Consequently, the typical domain decomposition of atmospheric models means horizontally-explicit vertically-implicit (HEVI) time-stepping schemes warrant further attention. In this analysis, Runge-Kutta implicit-explicit schemes from the literature are analysed for their stability and accuracy using a von Neumann stability analysis of two linear systems. Attention is paid to the numerical phase to indicate the behaviour of phase and group velocities. Where the analysis is tractable, analytically derived expressions are considered. For more complicated cases, amplification factors have been numerically generated and the associated amplitudes and phase diagnosed. Analysis of a system describing acoustic waves has necessitated attributing the three resultant eigenvalues to the three physical modes of the system. To do so, a series of algorithms has been devised to track the eigenvalues across the frequency space. The result enables analysis of whether the schemes exactly preserve the non-divergent mode; and whether there is evidence of spurious reversal in the direction of group velocities or asymmetry in the damping for the pair of acoustic modes. Frequency ranges that span next-generation high-resolution weather models to coarse-resolution climate models are considered; and a comparison is made of errors accumulated from multiple stability-constrained shorter time-steps from the HEVI scheme with a single integration from a fully implicit scheme over the same time interval. Two schemes, “Trap2(2,3,2)” and “UJ3(1,3,2)”, both already used in atmospheric models, are identified as offering consistently good stability and representation of phase across all the analyses. Furthermore, according to a simple measure of computational cost, “Trap2(2,3,2)” is the least expensive.
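
To illustrate the method of analysis only (not the paper's Trap2(2,3,2) or UJ3(1,3,2) schemes), a sketch of a von Neumann stability calculation for the simplest IMEX scheme on a split oscillation equation; the frequencies and time-step are arbitrary choices:

```python
# Von Neumann stability sketch for the simplest IMEX (HEVI-like) scheme:
# first-order IMEX Euler applied to du/dt = i*(w_ex + w_im)*u, with w_ex
# treated explicitly and w_im implicitly.
import numpy as np

dt = 1.0
w_ex = np.linspace(0.0, 0.5, 101)  # explicitly treated frequency (e.g. horizontal)
w_im = 1.0                         # implicitly treated frequency (e.g. vertical)

# The update u^{n+1} = u^n + i*dt*w_ex*u^n + i*dt*w_im*u^{n+1} gives
# the amplification factor:
A = (1 + 1j * dt * w_ex) / (1 - 1j * dt * w_im)

amplitude = np.abs(A)                            # |A| <= 1 means stable
phase_error = np.angle(A) - dt * (w_ex + w_im)   # numerical minus exact phase
print(amplitude.max(), np.abs(phase_error).max())
# Stable here because |w_ex| <= |w_im|; the scheme damps and slows the wave.
```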

Relevance:

100.00%

Publisher:

Abstract:

We describe infinitely scalable pipeline machines with perfect parallelism, in the sense that every instruction of an inline program is executed, on successive data, on every clock tick. Programs with shared data effectively execute in less than a clock tick. We show that pipeline machines are faster than single- or multi-core von Neumann machines for sufficiently many runs of a sufficiently time-consuming program. Our pipeline machines exploit the totality of transreal arithmetic and the known waiting time of statically compiled programs to deliver the interesting property that they need no hardware or software exception handling.
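
A sketch of the totalized division at the heart of transreal arithmetic, which is what removes the need for exception handling; modelling nullity as NaN is our assumption, not the paper's:

```python
# Totalized (transreal) division: every division has a result, so no
# divide-by-zero trap is ever raised. Rules follow Anderson's transreal
# arithmetic: x/0 is +inf for x > 0, -inf for x < 0, and nullity
# (modelled here as NaN) for 0/0.
import math

def transreal_div(x: float, y: float) -> float:
    if y != 0.0:
        return x / y
    if x > 0.0:
        return math.inf
    if x < 0.0:
        return -math.inf
    return math.nan  # nullity: 0/0 has a definite value, "phi"

assert transreal_div(1.0, 0.0) == math.inf
assert transreal_div(-2.0, 0.0) == -math.inf
assert math.isnan(transreal_div(0.0, 0.0))
```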

Relevance:

100.00%

Publisher:

Abstract:

Using McKenzie's taxonomy of optimal accumulation in the long run, we report a "uniform turnpike" theorem of the third kind in a model original to Robinson, Solow and Srinivasan (RSS), and further studied by Stiglitz. Our results are presented in the undiscounted, discrete-time setting emphasized in the recent work of Khan-Mitra, and they rely on the importance of strictly concave felicity functions or, alternatively, on the value of a "marginal rate of transformation", ξ_σ, from one period to the next not being unity. Our results, despite their specificity, contribute to the methodology of intertemporal optimization theory, as developed in economics by Ramsey, von Neumann and their followers.
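
For orientation, a schematic statement of the catching-up criterion commonly used to define optimality in the undiscounted setting (our notation; the paper's exact formulation may differ):

```latex
% A feasible program $\{x_t, c_t\}$ from a given initial stock is
% catching-up optimal if no feasible program $\{x'_t, c'_t\}$ from the
% same initial stock eventually accumulates strictly more felicity:
\[
  \limsup_{T \to \infty} \sum_{t=1}^{T}
  \bigl( w(c'_t) - w(c_t) \bigr) \le 0
  \quad \text{for every feasible } \{x'_t, c'_t\},
\]
% with $w$ the felicity function; strict concavity of $w$, or
% $\xi_\sigma \neq 1$, is what delivers the uniform turnpike property.
```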

Relevance:

100.00%

Publisher:

Abstract:

We define a subgame perfect Nash equilibrium under Knightian uncertainty for two players by means of a recursive backward induction procedure. We prove an extension of the Zermelo-von Neumann-Kuhn theorem for games of perfect information, i.e., that the recursive procedure generates a Nash equilibrium under uncertainty (Dow and Werlang (1994)) of the whole game. We apply the notion to two well-known games: the chain store and the centipede. On the one hand, we show that subgame perfection under Knightian uncertainty explains the chain-store paradox in a one-shot version. On the other hand, we show that subgame perfection under uncertainty does not account for the leaving behavior observed in the centipede game. This is in contrast to Dow, Orioli and Werlang (1996), where we explain the experiments of McKelvey and Palfrey (1992) by means of Nash equilibria under uncertainty (but not subgame perfect ones). Finally, we show that there may be nontrivial subgame perfect equilibria under uncertainty in more complex extensive-form games, as in the case of the finitely repeated prisoner's dilemma, which accounts for cooperation in the early stages of the game.
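
A sketch of the underlying recursive procedure, plain backward induction on a short centipede game; the Knightian-uncertainty adjustment of continuation values is not reproduced here, and the payoffs are made up:

```python
# Standard backward induction on a short centipede game. At node t the
# mover either takes (game ends with the listed payoffs) or passes to
# node t+1; at the last node the mover must take.

# take_payoffs[t] = (payoff to player 1, payoff to player 2) if the
# mover takes at node t; player 1 moves at even t, player 2 at odd t.
take_payoffs = [(1, 0), (0, 2), (3, 1), (2, 4)]

def solve(t: int) -> tuple[int, int]:
    """Return the backward-induction payoff pair at node t."""
    if t == len(take_payoffs) - 1:
        return take_payoffs[t]
    mover = t % 2                    # 0 -> player 1, 1 -> player 2
    cont = solve(t + 1)              # value of passing
    take = take_payoffs[t]
    return take if take[mover] >= cont[mover] else cont

print(solve(0))  # (1, 0): the first mover takes immediately
```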

Relevance:

100.00%

Publisher:

Abstract:

Prospect Theory is one of the foundations of behavioral finance and models investor behavior differently from von Neumann and Morgenstern's utility theory. Behavioral characteristics are evaluated for different control groups, validating the violation of the utility theory axioms. Naïve diversification is also verified, using the 1/n heuristic strategy for investment-fund allocations. This strategy produces fixed-income and equity allocations that differ from the desired exposure, given the exposure of the subsample that answered an unconstrained allocation question. Compared with non-specialists, finance specialists are less risk averse and allocate more of their wealth to equity.
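
A minimal sketch of the 1/n heuristic and why the offered fund menu, rather than the desired exposure, drives the resulting allocation; the menu compositions are invented examples, not data from the study:

```python
# The 1/n (naive diversification) heuristic: split wealth equally across
# whatever menu of funds is offered, so the resulting fixed-income/equity
# exposure is set by the menu's composition.
def one_over_n_equity_share(menu: list[str]) -> float:
    """Equity exposure implied by equal allocation across the menu."""
    weight = 1.0 / len(menu)
    return sum(weight for kind in menu if kind == "equity")

print(one_over_n_equity_share(["equity", "fixed", "fixed"]))             # 0.33...
print(one_over_n_equity_share(["equity", "equity", "equity", "fixed"]))  # 0.75
```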

Relevance:

100.00%

Publisher:

Abstract:

This work examines the portfolio selection model developed by Markowitz, mainly with respect to: its relationship to von Neumann-Morgenstern utility theory; the algorithms for solving the resulting parametric quadratic programming problem; and the simplification provided by Sharpe's Diagonal Model. It shows that the existence of a riskless asset permits the statement of the Separation Theorem and simplifies the portfolio selection problem. It analyzes the CAPM, the model of capital market equilibrium under uncertainty, comparing the deductive approaches employed by Lintner and Mossin. It examines the implications of relaxing the assumptions underlying this general equilibrium model, mainly the Zero-Beta portfolio theory developed by Black.
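
A minimal mean-variance sketch in the spirit of the Markowitz model discussed above, solving the first-order conditions directly; the returns and covariances are made-up numbers:

```python
# Find the minimum-variance weights achieving a target expected return,
# with weights summing to one (short sales allowed, no riskless asset).
import numpy as np

mu = np.array([0.08, 0.12, 0.10])        # expected returns (illustrative)
cov = np.array([[0.04, 0.01, 0.00],      # covariance matrix (illustrative)
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.06]])
target = 0.10

# Lagrangian first-order conditions give a linear system in (w, l1, l2):
#   cov @ w = l1 * mu + l2 * 1,   mu @ w = target,   sum(w) = 1.
n = len(mu)
ones = np.ones(n)
A = np.block([[cov, -mu[:, None], -ones[:, None]],
              [mu[None, :], np.zeros((1, 2))],
              [ones[None, :], np.zeros((1, 2))]])
b = np.concatenate([np.zeros(n), [target, 1.0]])
w = np.linalg.solve(A, b)[:n]
print(w, w @ mu, w @ cov @ w)            # weights, return, variance
```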

Relevance:

100.00%

Publisher:

Abstract:

In this work we give a brief study of operator algebras, more specifically C*-algebras and von Neumann algebras. The objective is to present some results that are the non-commutative analogues of theorems in measure theory and ergodic theory. We begin by stating some results from functional analysis and spectral theory, many of them with proofs, with special emphasis on those concerning the algebras. This provides the tools needed to discuss some topics of so-called non-commutative integration theory. A Jensen-type inequality is proved and, using the Radon-Nikodym theorem for normal positive functionals, we construct a conditional expectation, proving that it has the same properties as the conditional expectation of probability theory. Given the conditional expectation, an object that is part of the current research landscape in operator algebras and is related to fundamental results such as the Jones index, we pass to the definition of the Connes-Størmer entropy. We close the work by analyzing this entropy, which is the von Neumann algebra version of the Kolmogorov-Sinai entropy of ergodic theory. We prove some properties analogous to those of the classical concept of entropy and indicate an application. The text contains no original results; it is a re-reading of articles using more recent versions of some theorems.
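
For orientation, the standard definition of the trace-preserving conditional expectation the abstract refers to, stated here for a von Neumann algebra with a faithful normal trace (the dissertation's setting may be more general):

```latex
% Let $M$ be a von Neumann algebra with faithful normal trace $\tau$ and
% $N \subseteq M$ a von Neumann subalgebra. The conditional expectation
% $E_N \colon M \to N$ is the unique normal, positive, $N$-bimodule map with
\[
  \tau\bigl(E_N(x)\,y\bigr) = \tau(x\,y)
  \qquad \text{for all } x \in M,\ y \in N,
\]
% mirroring $\mathbb{E}\bigl[\mathbb{E}[X \mid \mathcal{F}]\,Y\bigr]
% = \mathbb{E}[XY]$ for $\mathcal{F}$-measurable $Y$ in classical probability.
```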

Relevance:

100.00%

Publisher:

Abstract:

We characterize optimal policy in a two-sector growth model with fixed coefficients and with no discounting. The model is a specialization, to a single type of machine, of a general vintage capital model originally formulated by Robinson, Solow and Srinivasan, and its simplicity is not mirrored in its rich dynamics, which seem to have been missed in earlier work. Our results are obtained by viewing the model as a specific instance of the general theory of resource allocation initiated by Ramsey and von Neumann and brought to completion by McKenzie. In addition to the more recent literature on chaotic dynamics, we relate our results to the older literature on optimal growth with one state variable: specifically, to the one-sector setting of Ramsey, Cass and Koopmans, as well as to the two-sector setting of Srinivasan and Uzawa. The analysis is purely geometric, and from a methodological point of view our work can be seen as an argument, at least in part, for the rehabilitation of geometric methods as an engine of analysis.

Relevance:

100.00%

Publisher:

Abstract:

The main objective of this article is to test the hypothesis that utility preferences that incorporate asymmetric reactions to gains and losses generate better results than the classic von Neumann-Morgenstern utility functions in the Brazilian market. The asymmetric behavior can be captured by introducing a disappointment (or loss) aversion coefficient into the classical expected utility function, which increases the impact of losses relative to gains. The results generated by both the traditional and the loss-aversion utility functions are compared with real data from the Brazilian market on stock market participation in the investment portfolios of pension funds and individual investors.
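
A sketch of a loss-averse expected utility of the kind described, with an illustrative functional form and coefficient (not the paper's calibration):

```python
# Losses are amplified by an aversion coefficient lam > 1 before
# utilities are averaged over outcomes; lam = 1 recovers the classical
# (risk-neutral) expected utility of the payoffs.
def loss_averse_eu(outcomes: list[tuple[float, float]],
                   lam: float = 2.25,
                   reference: float = 0.0) -> float:
    """outcomes: (probability, payoff) pairs; lam: loss-aversion coefficient."""
    total = 0.0
    for p, x in outcomes:
        gain = x - reference
        total += p * (gain if gain >= 0 else lam * gain)
    return total

gamble = [(0.5, 100.0), (0.5, -100.0)]   # fair coin flip
print(loss_averse_eu(gamble, lam=1.0))   # 0.0   (symmetric treatment)
print(loss_averse_eu(gamble, lam=2.25))  # -62.5 (losses loom larger)
```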

Relevance:

100.00%

Publisher:

Abstract:

This study shows the implementation and embedding of an artificial neural network (ANN) in hardware, in a programmable device such as a field-programmable gate array (FPGA). This work allowed the exploration of different implementations, described in VHDL, of multilayer perceptron ANNs. Because of the parallelism inherent in ANNs, software implementations are at a disadvantage owing to the sequential nature of von Neumann architectures. As an alternative, a hardware implementation makes it possible to exploit all the parallelism implicit in this model. FPGAs are increasingly used as a platform for implementing neural networks in hardware, exploiting their high processing power, low cost, ease of programming and circuit reconfigurability, which allows the network to adapt to different applications. In this context, the aim is to develop arrays of neural networks in hardware with a flexible architecture, in which it is possible to add or remove neurons and, mainly, to modify the network topology, so as to enable a modular fixed-point-arithmetic network on an FPGA. Five VHDL descriptions were synthesized: two for the neuron, with one or two inputs, and three for different ANN architectures. The architecture descriptions are highly modular, easily allowing the number of neurons to be increased or decreased. As a result, several complete neural networks were implemented on the FPGA, in fixed-point arithmetic, with high-capacity parallel processing.
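
A software sketch of the fixed-point multiply-accumulate such a hardware neuron performs; the Q4.12 format, saturation behaviour, and activation are illustrative assumptions, not the dissertation's exact design choices:

```python
# Fixed-point neuron arithmetic as a DSP block would compute it: integer
# multiply-accumulate, rescale, saturate, activate. In hardware all
# products across neurons are computed in parallel.
FRAC_BITS = 12                      # Q4.12: 4 integer bits, 12 fraction bits
SCALE = 1 << FRAC_BITS
Q_MAX, Q_MIN = (1 << 15) - 1, -(1 << 15)   # 16-bit signed range

def to_fixed(x: float) -> int:
    return max(Q_MIN, min(Q_MAX, round(x * SCALE)))

def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    acc = to_fixed(bias) << FRAC_BITS      # accumulate at double precision
    for x, w in zip(inputs, weights):
        acc += to_fixed(x) * to_fixed(w)
    acc >>= FRAC_BITS                      # rescale product back to Q4.12
    acc = max(Q_MIN, min(Q_MAX, acc))      # saturate instead of wrapping
    return max(0.0, acc / SCALE)           # ReLU-like activation for the sketch

print(neuron([0.5, -0.25], [1.5, 2.0], bias=0.1))  # ~0.35
```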

Relevance:

100.00%

Publisher:

Abstract:

This work presents an extension of the haRVey prover aimed at verifying proof obligations arising from the B method. The B method for software development covers the specification, design, and implementation phases of the software life cycle. In the verification context, the proof tools Prioni, Z/EVES, and Atelier-B/Click'n'Prove stand out. They implement formalisms that support satisfiability checking of formulas of axiomatic set theory, and can therefore be applied to the B method. SMT checking consists of checking the satisfiability of quantifier-free first-order formulas over a decidable theory. The SMT-checking approach implemented by the automatic theorem prover haRVey is presented, adopting a theory of arrays that cannot express all the constructions required by set-based specifications. To extend SMT checking to set theories, the Zermelo-Fraenkel (ZFC) and von Neumann-Bernays-Gödel (NBG) set theories stand out. Since the SMT-checking approach implemented in haRVey requires a finitely axiomatized theory and can be extended to undecidable theories, NBG is a suitable option for extending haRVey's deductive power to set theory. Thus, by mapping the set operators provided by the B language to classes of the NBG theory, an alternative SMT-checking approach applicable to the B method is obtained.
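
An illustrative rendering (ours, not haRVey's actual encoding) of how B set operators reduce to membership predicates over NBG classes:

```latex
% B set operators as first-order membership conditions; dropping the
% outer quantifier for a fresh constant $x$ yields the quantifier-free
% verification conditions an SMT-style procedure can handle.
\begin{align*}
  x \in A \cup B      &\;\Leftrightarrow\; x \in A \lor x \in B \\
  x \in A \cap B      &\;\Leftrightarrow\; x \in A \land x \in B \\
  x \in A \setminus B &\;\Leftrightarrow\; x \in A \land x \notin B \\
  A \subseteq B       &\;\Leftrightarrow\; \forall x\,(x \in A \Rightarrow x \in B)
\end{align*}
% NBG's finite axiomatization is what makes the class theory usable by
% the approach described above.
```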