979 results for equatorial von Neumann measurement


Relevance: 100.00%

Abstract:

In [4], Guillard and Viozat propose a finite volume method for the simulation of inviscid steady as well as unsteady flows at low Mach numbers, based on a preconditioning technique. The scheme satisfies the results of a single-scale asymptotic analysis in a discrete sense and has the advantage that it can be derived by a slight modification of the dissipation term within the numerical flux function. Unfortunately, numerical experiments show that the preconditioned approach combined with an explicit time integration scheme becomes unstable unless the time step Δt satisfies the requirement of being O(M²) as the Mach number M tends to zero, whereas the corresponding standard method remains stable up to Δt = O(M), M → 0, as expected from the well-known CFL condition. We present a comprehensive mathematical substantiation of this numerical phenomenon by means of a von Neumann stability analysis, which reveals that, in contrast to the standard approach, the dissipation matrix of the preconditioned numerical flux function possesses an eigenvalue growing like M⁻² as M tends to zero, thus shrinking the stability region of the explicit scheme. We present statements for both the standard preconditioner used by Guillard and Viozat [4] and the more general one due to Turkel [21]. The theoretical results are afterwards confirmed by numerical experiments.
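For orientation, the following minimal sketch shows what a von Neumann stability analysis looks like for the simplest explicit upwind discretisation of linear advection; it is not the preconditioned low-Mach scheme analysed above, and the Courant numbers used are hypothetical.

import numpy as np

def amplification_factor(courant, k_dx):
    # Inserting the Fourier mode u_j^n = g^n * exp(i*k*j*dx) into the
    # first-order upwind scheme for u_t + a u_x = 0 gives
    # g(k) = 1 - c * (1 - exp(-i*k*dx)), with Courant number c = a*dt/dx.
    return 1.0 - courant * (1.0 - np.exp(-1j * k_dx))

k_dx = np.linspace(0.0, 2.0 * np.pi, 721)
for c in (0.5, 1.0, 1.2):
    g_max = np.abs(amplification_factor(c, k_dx)).max()
    verdict = "stable" if g_max <= 1.0 + 1e-12 else "unstable"
    print(f"Courant number {c}: max |g| = {g_max:.3f} ({verdict})")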

Relevance: 100.00%

Abstract:

This document is a case study developed in accordance with the guidelines set out in the National Development Plan 2010–2014, "Prosperidad para todos", in which the Colombian government establishes that 1,000,000 housing solutions must be provided nationwide during this presidential term, of which 254,920 are the responsibility of the Fondo Nacional del Ahorro (FNA). The study therefore analyses the strategies the FNA has been developing, with the aim of proposing alternatives that allow the entity's senior management to make decisions consistent with its housing-promotion models, which have been aligned with the objectives defined by the national government for its central housing policy.

Relevance: 100.00%

Abstract:

The strategic equilibrium of an N-person cooperative game with transferable utility is a system composed of a cover collection of subsets of N and a set of extended imputations attainable through such an equilibrium cover. The system describes a state of coalitional bargaining stability in which every player has a bargaining alternative against any other player to support his corresponding equilibrium claim. Any coalition in the stable system may form and divide the characteristic-function value of the coalition as prescribed by the equilibrium payoffs. If syndicates are allowed to form, a formed coalition may become a syndicate, using the equilibrium payoffs as disagreement values when bargaining for a part of the complementary coalition's incremental value to the grand coalition once formed. The emergent, well-known constant-sum derived game in partition function form is described in terms of parameters that result from incumbent binding agreements. The strategic equilibrium corresponding to the derived game gives an equal value claim to all players. This surprising result is alternatively explained in terms of strategic-equilibrium-based possible outcomes of a sequence of bargaining stages: when the binding agreements are in the right sequential order, von Neumann and Morgenstern (vN-M) non-discriminatory solutions emerge. In these solutions a branch preferred by a sufficient number of players is identified: the weaker players syndicate against the stronger player. This condition is referred to as the stronger player paradox. A strategic alternative available to the stronger player to overcome the anticipated undesirable results is to voluntarily lower his bargaining equilibrium claim. In doing so, the original strategic equilibrium is modified and vN-M discriminatory solutions may occur, but a different stronger player may also emerge who will eventually have to lower his equilibrium claim. A sequence of such measures converges to the equal-opportunity-for-all vN-M solution anticipated by the strategic equilibrium of the partition function derived game.

Relevance: 100.00%

Abstract:

The development of linear economic models was one of the most significant achievements of economic theory in post-war North America. Linear programming, developed by George B. Dantzig (1947), the input-output models of Wassily Leontief (1946) and the game theory of John von Neumann (1944) became three distinct branches of linear economic theory. Their applications in various fields of knowledge, such as economics and political science, and in management activities in industry and government, are increasingly significant. The main objective of this work is to present a practical model of the production processes typical of a factory or firm that transforms inputs into products. The model is developed in the context, and with the concepts, of the theory of linear economic models and of the operations research approach, also known as management science.
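As a concrete, hypothetical illustration of the kind of linear production model described above (not taken from the paper), the following sketch uses linear programming to choose production levels for a plant that converts three inputs into two products; every coefficient is made up.

import numpy as np
from scipy.optimize import linprog

profit = np.array([40.0, 30.0])             # profit per unit of each product
usage = np.array([[2.0, 1.0],               # machine hours per unit
                  [1.0, 2.0],               # labour hours per unit
                  [3.0, 2.0]])              # raw material per unit
available = np.array([100.0, 80.0, 180.0])  # available amount of each input

# linprog minimises, so negate the profit vector to maximise profit subject
# to the input-availability constraints usage @ x <= available, x >= 0.
res = linprog(-profit, A_ub=usage, b_ub=available,
              bounds=[(0, None), (0, None)], method="highs")
print("production plan:", res.x, "max profit:", -res.fun)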

Relevance: 100.00%

Abstract:

From the beginning, the world of game-playing by machine has been fortunate in attracting contributions from the leading names of computer science. Charles Babbage, Konrad Zuse, Claude Shannon, Alan Turing, John von Neumann, John McCarthy, Allen Newell, Herb Simon and Ken Thompson all come to mind, and each reader will wish to add to this list. Recently, the Journal has saluted both Claude Shannon and Herb Simon. Ken’s retirement from Lucent Technologies’ Bell Labs to the start-up Entrisphere is also a good moment for reflection.

Relevance: 100.00%

Abstract:

With the prospect of exascale computing, computational methods requiring only local data become especially attractive. Consequently, the typical domain decomposition of atmospheric models means horizontally-explicit vertically-implicit (HEVI) time-stepping schemes warrant further attention. In this analysis, Runge-Kutta implicit-explicit schemes from the literature are analysed for their stability and accuracy using a von Neumann stability analysis of two linear systems. Attention is paid to the numerical phase to indicate the behaviour of phase and group velocities. Where the analysis is tractable, analytically derived expressions are considered. For more complicated cases, amplification factors have been numerically generated and the associated amplitudes and phase diagnosed. Analysis of a system describing acoustic waves has necessitated attributing the three resultant eigenvalues to the three physical modes of the system. To do so, a series of algorithms has been devised to track the eigenvalues across the frequency space. The result enables analysis of whether the schemes exactly preserve the non-divergent mode; and whether there is evidence of spurious reversal in the direction of group velocities or asymmetry in the damping for the pair of acoustic modes. Frequency ranges that span next-generation high-resolution weather models to coarse-resolution climate models are considered; and a comparison is made of errors accumulated from multiple stability-constrained shorter time-steps from the HEVI scheme with a single integration from a fully implicit scheme over the same time interval. Two schemes, “Trap2(2,3,2)” and “UJ3(1,3,2)”, both already used in atmospheric models, are identified as offering consistently good stability and representation of phase across all the analyses. Furthermore, according to a simple measure of computational cost, “Trap2(2,3,2)” is the least expensive.
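As a toy illustration of how an amplification factor can be generated numerically for an implicit-explicit splitting, the sketch below applies a first-order IMEX Euler scheme to a scalar oscillation equation; it is not one of the Runge-Kutta schemes assessed in the paper, and the frequencies and time step are hypothetical.

import numpy as np

def imex_euler_amplification(dt, w_exp, w_imp):
    # du/dt = i*(w_exp + w_imp)*u, with the slow frequency w_exp treated
    # explicitly and the fast frequency w_imp implicitly:
    # u^{n+1} = u^n + dt*i*w_exp*u^n + dt*i*w_imp*u^{n+1}
    return (1.0 + 1j * dt * w_exp) / (1.0 - 1j * dt * w_imp)

dt = 10.0                      # time step in seconds (hypothetical)
w_exp, w_imp = 1e-3, 1e-1      # slow (explicit) and fast (implicit) frequencies
A = imex_euler_amplification(dt, w_exp, w_imp)
exact_phase = dt * (w_exp + w_imp)
print(f"|A| = {abs(A):.4f}, phase error = {np.angle(A) - exact_phase:+.4f} rad")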

Relevance: 100.00%

Abstract:

We describe infinitely scalable pipeline machines with perfect parallelism, in the sense that every instruction of an in-line program is executed, on successive data, on every clock tick. Programs with shared data effectively execute in less than a clock tick. We show that pipeline machines are faster than single- or multi-core von Neumann machines for sufficiently many runs of a sufficiently time-consuming program. Our pipeline machines exploit the totality of transreal arithmetic and the known waiting time of statically compiled programs to deliver the interesting property that they need no hardware or software exception handling.
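As a small illustration of the "totality" the abstract attributes to transreal arithmetic, the sketch below defines a division that never raises an exception; using float('nan') as a stand-in for the transreal nullity is an assumption of this sketch and does not reproduce all transreal axioms.

import math

def transreal_div(a, b):
    # Total division: 1/0 = +infinity, -1/0 = -infinity, 0/0 = nullity,
    # so no run-time exception is ever raised.
    if b != 0.0:
        return a / b
    if a > 0.0:
        return math.inf
    if a < 0.0:
        return -math.inf
    return math.nan   # nullity stand-in for 0/0

for a, b in [(1.0, 0.0), (-2.0, 0.0), (0.0, 0.0), (6.0, 3.0)]:
    print(f"{a}/{b} = {transreal_div(a, b)}")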

Relevance: 100.00%

Abstract:

Using McKenzie’s taxonomy of optimal accumulation in the long run, we report a “uniform turnpike” theorem of the third kind in a model original to Robinson, Solow and Srinivasan (RSS) and further studied by Stiglitz. Our results are presented in the undiscounted, discrete-time setting emphasized in the recent work of Khan-Mitra, and they rely on the importance of strictly concave felicity functions, or alternatively, on the value of a “marginal rate of transformation”, ξ_σ, from one period to the next not being unity. Our results, despite their specificity, contribute to the methodology of intertemporal optimization theory, as developed in economics by Ramsey, von Neumann and their followers.

Relevance: 100.00%

Abstract:

We define a subgame perfect Nash equilibrium under Knightian uncertainty for two players, by means of a recursive backward induction procedure. We prove an extension of the Zermelo-von Neumann-Kuhn Theorem for games of perfect information, i.e., that the recursive procedure generates a Nash equilibrium under uncertainty (Dow and Werlang (1994)) of the whole game. We apply the notion to two well-known games: the chain store and the centipede. On the one hand, we show that subgame perfection under Knightian uncertainty explains the chain store paradox in a one-shot version. On the other hand, we show that subgame perfection under uncertainty does not account for the leaving behavior observed in the centipede game. This is in contrast to Dow, Orioli and Werlang (1996), where we explain the experiments of McKelvey and Palfrey (1992) by means of Nash equilibria under uncertainty (but not subgame perfect ones). Finally, we show that there may be nontrivial subgame perfect equilibria under uncertainty in more complex extensive-form games, as in the case of the finitely repeated prisoner's dilemma, which accounts for cooperation in the early stages of the game.
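To fix ideas about the recursive backward-induction procedure that the paper extends to Knightian uncertainty, here is a sketch of classical backward induction on a stylised centipede game with standard expected-utility players; the payoff structure (a pot that doubles each stage, with the stopping player keeping 80% of it) is hypothetical and not taken from McKelvey and Palfrey.

def backward_induction(n_stages, pot0=1.0):
    # Returns ([payoff of player 0, payoff of player 1], stage at which play stops).
    # Work backwards from the terminal node, where the fully grown pot is
    # split 20/80 against the player who moved last.
    pot = pot0 * 2 ** n_stages
    last_mover = (n_stages - 1) % 2
    payoffs = [0.0, 0.0]
    payoffs[last_mover] = 0.2 * pot
    payoffs[1 - last_mover] = 0.8 * pot
    stop_stage = None                        # None means "everyone passes"
    for stage in range(n_stages - 1, -1, -1):
        mover = stage % 2
        pot = pot0 * 2 ** stage
        take = [0.0, 0.0]
        take[mover] = 0.8 * pot
        take[1 - mover] = 0.2 * pot
        if take[mover] >= payoffs[mover]:    # taking now beats continuing
            payoffs, stop_stage = take, stage
    return payoffs, stop_stage

print(backward_induction(6))   # classical result: play stops at stage 0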

Relevance: 100.00%

Abstract:

Prospect Theory is one of the foundations of behavioral finance and models investor behavior differently from von Neumann-Morgenstern utility theory. Behavioral characteristics are evaluated for different control groups, confirming violations of the utility theory axioms. Naïve diversification is also verified, through the 1/n heuristic strategy for allocations across investment funds. This strategy produces fixed-income and equity allocations that differ from the desired exposure, as given by the subsample that answered an unconstrained allocation question. Compared to non-specialists, specialists in finance are less risk averse and allocate more of their wealth to equity.
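A minimal sketch of the 1/n heuristic mentioned above, showing why equal weighting across the funds on offer yields fixed-income/equity exposures that depend on the menu rather than on the investor's target allocation; the menus are hypothetical.

# The 1/n heuristic spreads wealth equally over whatever funds are offered,
# so the resulting equity exposure is driven by menu composition.
menus = {
    "menu A (1 equity fund, 3 fixed-income funds)": ["equity", "fixed", "fixed", "fixed"],
    "menu B (3 equity funds, 1 fixed-income fund)": ["equity", "equity", "equity", "fixed"],
}
for name, funds in menus.items():
    share = 1.0 / len(funds)
    equity = share * sum(f == "equity" for f in funds)
    print(f"{name}: equity exposure = {equity:.0%}")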

Relevance: 100.00%

Abstract:

This work examines the portfolio selection model developed by Markowitz, particularly with regard to: its relationship to von Neumann-Morgenstern utility theory; the solution algorithms for the parametric quadratic programming problem it gives rise to; and the simplification provided by Sharpe's Diagonal Model. It shows that the existence of a riskless asset allows the Separation Theorem to be stated and the portfolio selection problem to be simplified. It analyses the CAPM, the model of capital market equilibrium under uncertainty, comparing the derivations employed by Lintner and Mossin. Finally, it examines the implications of relaxing the assumptions underlying this general equilibrium model, in particular the Zero-Beta portfolio theory developed by Black.
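For reference, a minimal sketch of one building block of the Markowitz model discussed above: the global minimum-variance portfolio, whose weights solve min wᵀΣw subject to wᵀ1 = 1 and have the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1). The covariance matrix below is hypothetical.

import numpy as np

sigma = np.array([[0.040, 0.006, 0.004],   # hypothetical covariance matrix
                  [0.006, 0.090, 0.010],
                  [0.004, 0.010, 0.160]])
ones = np.ones(sigma.shape[0])
w = np.linalg.solve(sigma, ones)           # Sigma^{-1} 1
w /= w.sum()                               # normalise so the weights sum to 1
print("minimum-variance weights:", np.round(w, 3))
print("portfolio variance:", float(w @ sigma @ w))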

Relevance: 100.00%

Abstract:

In this work we give a brief account of operator algebras, more specifically C*-algebras and von Neumann algebras. The aim is to present some results that are the non-commutative analogues of theorems in measure theory and ergodic theory. We first state some results from functional analysis and spectral theory, many of them with proofs, with special emphasis on those concerning the algebras. With these tools in hand, we discuss some topics of so-called non-commutative integration theory. A Jensen-type inequality is proved and, using the Radon-Nikodym theorem for normal positive functionals, we construct a conditional expectation, proving that it has the same properties as the conditional expectation of probability theory. Given the conditional expectation, an object that is part of current research in operator algebras and is related to fundamental results such as the Jones index, we move on to the definition of the Connes-Størmer entropy. We conclude by analysing this entropy, which is the von Neumann algebra version of the Kolmogorov-Sinai entropy of ergodic theory. We prove some properties analogous to those of the classical notion of entropy and indicate an application. The text contains no original results; it is a rereading of existing articles using more recent versions of some theorems.
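For orientation, here are the standard defining properties of a conditional expectation E : M → N onto a von Neumann subalgebra N ⊆ M, as used in the noncommutative setting sketched above; they are stated from standard references, not quoted from the dissertation, and τ denotes a faithful normal trace in the tracial case.

\begin{align*}
  &E(x) \in N \ \text{ for all } x \in M, \qquad E(a) = a \ \text{ for all } a \in N,\\
  &E(a x b) = a\,E(x)\,b \quad \text{for all } a,b \in N,\ x \in M \quad \text{($N$-bimodule property)},\\
  &x \ge 0 \ \Rightarrow\ E(x) \ge 0, \qquad E(1) = 1,\\
  &\tau \circ E = \tau \quad \text{(trace preservation, when $E$ is $\tau$-preserving)}.
\end{align*}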

Relevance: 100.00%

Abstract:

We characterize optimal policy in a two-sector growth model with fixed coefficients and with no discounting. The model is a specialization, to a single type of machine, of a general vintage capital model originally formulated by Robinson, Solow and Srinivasan, and its simplicity is not mirrored in its rich dynamics, which seem to have been missed in earlier work. Our results are obtained by viewing the model as a specific instance of the general theory of resource allocation as initiated originally by Ramsey and von Neumann and brought to completion by McKenzie. In addition to the more recent literature on chaotic dynamics, we relate our results to the older literature on optimal growth with one state variable: specifically, to the one-sector setting of Ramsey, Cass and Koopmans, as well as to the two-sector setting of Srinivasan and Uzawa. The analysis is purely geometric, and from a methodological point of view, our work can be seen as an argument, at least in part, for the rehabilitation of geometric methods as an engine of analysis.

Relevance: 100.00%

Abstract:

The main objective of this article is to test the hypothesis that utility preferences incorporating asymmetric reactions to gains and losses generate better results than the classic von Neumann-Morgenstern utility functions in the Brazilian market. The asymmetric behavior can be computed through the introduction of a disappointment (or loss) aversion coefficient into the classical expected utility function, which increases the impact of losses relative to gains. The results generated by both the traditional and the loss-aversion utility functions are compared with real data from the Brazilian market regarding stock market participation in the investment portfolios of pension funds and individual investors.
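A minimal sketch of the kind of asymmetric criterion described above: a piecewise-linear utility in which a loss-aversion coefficient lambda > 1 amplifies outcomes below a zero reference point. The linear utility, the gamble and the value lambda = 2.25 are illustrative assumptions, not parameters from the article.

import numpy as np

def loss_averse_expected_utility(outcomes, probs, lam=2.25):
    # Outcomes below the reference point (zero) are weighted by lam > 1,
    # so losses hurt more than equal-sized gains please.
    outcomes = np.asarray(outcomes, dtype=float)
    probs = np.asarray(probs, dtype=float)
    utilities = np.where(outcomes >= 0.0, outcomes, lam * outcomes)
    return float(np.dot(probs, utilities))

gamble = ([+100.0, -100.0], [0.5, 0.5])     # a fair 50/50 gamble
print("standard expected value:", loss_averse_expected_utility(*gamble, lam=1.0))
print("loss-averse evaluation: ", loss_averse_expected_utility(*gamble, lam=2.25))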