967 results for optimal foraging theory
Abstract:
In this paper we examine the properties of a hybrid auction that combines a sealed bid and an ascending auction. In this auction, each bidder submits a sealed bid. Once the highest bid is known, the bidder who submitted it is declared the winner if her bid is higher than the second highest by more than a predetermined amount or percentage. If at least one more bidder submitted a bid sufficiently close to the highest bid (that is, if the difference between this bid and the highest bid is smaller than the predetermined amount or percentage), the qualified buyers compete in an open ascending auction that has the highest bid of the first stage as the reserve price. Qualified bidders include not only the highest bidder in the first stage but also those who bid close enough to her. We show that this auction generates more revenue than a standard auction. Although this hybrid auction does not generate as much revenue as the optimal auction, it is ex-post efficient.
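A minimal sketch of the first-stage rule this abstract describes, assuming an absolute tolerance (the paper also allows a percentage); the function name and bid values are illustrative, and the payment rule for an outright win is not specified in the abstract:

    # Sketch of the hybrid auction's first stage (illustrative, assumed names).
    def first_stage(sealed_bids, tolerance):
        """Return an outright winner, or the set of bidders qualified for
        the ascending second stage with the top bid as reserve price."""
        ranked = sorted(sealed_bids.items(), key=lambda kv: kv[1], reverse=True)
        (top_bidder, top_bid), (_, second_bid) = ranked[0], ranked[1]

        # Outright win: top bid beats the runner-up by more than the tolerance.
        # (The abstract does not specify the payment rule in this case.)
        if top_bid - second_bid > tolerance:
            return {"winner": top_bidder}

        # Otherwise all bidders within the tolerance of the top bid qualify
        # for an open ascending auction with the top bid as reserve.
        qualified = {b for b, v in sealed_bids.items() if top_bid - v <= tolerance}
        return {"qualified": qualified, "reserve": top_bid}

    print(first_stage({"A": 100, "B": 97, "C": 80}, tolerance=5))
    # e.g. -> {'qualified': {'A', 'B'}, 'reserve': 100}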
Abstract:
We develop a theory of public versus private ownership based on value diversion by managers. Government is assumed to face stronger institutional constraints than in previous literature. The model which emerges from these assumptions is flexible and has wide application. We provide a mapping between the qualitative characteristics of an asset, its main use - including public goods characteristics and spillovers to other assets' values - and the optimal ownership and management regime. The model is applied to single and multiple related assets. We address questions such as: when is it optimal to have one of a pair of related assets public and the other private; when is joint management desirable; and when should a public asset be managed by the owner of a related private asset? We show that while private ownership can be judged optimal in some cases solely on the basis of qualitative information, the optimality of any other ownership and management regime relies on quantitative analysis. Our results reveal the situations in which policy makers will have difficulty in determining the optimal regime.
Abstract:
This doctoral thesis is devoted to the study of financial instability and dynamics in Monetary Theory. It is shown that bank runs are costlessly eliminated in the standard model of banking theory when the population is not small. An extension is proposed in which aggregate uncertainty is more severe and the cost of financial stability is relevant. Finally, the optimality of transitions in the distribution of money is established for economies in which trading opportunities are scarce and heterogeneous. In particular, the optimality of inflation depends on the dynamic incentives provided by such transitions. Chapter 1 establishes the costless-stability result for large economies by studying the effects of population size in the Peck & Shell analysis of bank runs. In Chapter 2, the optimality of dynamics is studied in the Kiyotaki & Wright monetary model when society is able to implement an inflationary policy. Although it adopts a mechanism-design approach, this chapter draws a parallel with the analysis of Sargent & Wallace (1981) by highlighting the effects of dynamic incentives on the interaction between monetary and fiscal policy. Chapter 3 returns to the theme of financial stability by quantifying the costs involved in the optimal design of a run-proof banking sector and by proposing an alternative informational structure that allows for insolvent banks. The first analysis shows that the optimal stability scheme exhibits high long-term interest rates; the second, that imperfect monitoring can lead to bank runs with insolvency.
Abstract:
This thesis contains two chapters, each dealing with the theory and history of banking and financial arrangements. Chapter 1 extends a Diamond-Dybvig economy with imperfect monitoring of early withdrawals and compares social welfare across the feasible allocations, as proposed in Prescott and Weinberg (2003). Imperfect monitoring is implemented through indirect communication (via a means of payment) between the agents and the deposit-and-withdrawal machine, an aggregate of the productive and financial sectors. The extension studies allocations in which a fraction of agents can exploit the imperfect monitoring and defraud the contracted allocation by consuming early beyond the limit, using multiple means of payment. With punishment limited to the late-consumption period, this new allocation can be called a separating allocation, in contrast with pooling allocations in which the agent able to commit fraud is blocked either by a fraud-proof but costly means of payment or by receiving enough future consumption to make fraud unattractive. The welfare comparison over the chosen range of parameters shows that separating allocations are optimal for lower-endowment economies and pooling allocations for intermediate and rich ones. The chapter ends with a possible historical context for the model, which connects with the historical narrative in Chapter 2. Chapter 2 explores the quantitative properties of an early-warning system for financial crises, with the variables chosen from a "boom and bust" framework described in more detail in Appendix 1. The main variables are: real growth in house and stock prices, the spread between the 10-year government bond yield and the 3-month interbank rate, and growth in total banking-sector assets. These variables produce a higher rate of correct signals for the recent banking crises (1984-2008) than comparable leading-indicator systems. Accounting for increasing base risk (due to the tendency of distortions in the relative price system to accumulate during earlier expansions) also provides information and raises the number of correct signals in countries that did not experience such a vigorous credit and asset-price expansion.
Abstract:
We characterize optimal policy in a two-sector growth model with fixed coefficients and with no discounting. The model is a specialization to a single type of machine of a general vintage capital model originally formulated by Robinson, Solow and Srinivasan, and its simplicity is not mirrored in its dynamics, which are rich and seem to have been missed in earlier work. Our results are obtained by viewing the model as a specific instance of the general theory of resource allocation as initiated originally by Ramsey and von Neumann and brought to completion by McKenzie. In addition to the more recent literature on chaotic dynamics, we relate our results to the older literature on optimal growth with one state variable: specifically, to the one-sector setting of Ramsey, Cass and Koopmans, as well as to the two-sector setting of Srinivasan and Uzawa. The analysis is purely geometric, and from a methodological point of view, our work can be seen as an argument, at least in part, for the rehabilitation of geometric methods as an engine of analysis.
Abstract:
We extend the static portfolio choice problem with a small background risk to the case of small partially correlated background risks. We show that, retaining the assumptions under which risk substitution appears except for the independence of background risk, it is perfectly rational for the individual to increase his optimal exposure to portfolio risk when risks are partially negatively correlated. We then test empirically the hypothesis of risk substitutability using INSEE data on French households. We find that households respond by increasing their stockholdings in response to an increase in future earnings uncertainty. This conclusion contradicts results obtained in other countries. So, in light of these results, our model provides an explanation for the lack of empirical consensus on cross-country tests of risk substitution theory, one that encompasses and critiques all of them.
Abstract:
This paper considers two-sided tests for the parameter of an endogenous variable in an instrumental variable (IV) model with heteroskedastic and autocorrelated errors. We develop the finite-sample theory of weighted-average power (WAP) tests with normal errors and a known long-run variance. We introduce two weights which are invariant to orthogonal transformations of the instruments; e.g., changing the order in which the instruments appear. While tests using the MM1 weight can be severely biased, optimal tests based on the MM2 weight are naturally two-sided when errors are homoskedastic. We propose two boundary conditions that yield two-sided tests whether errors are homoskedastic or not. The locally unbiased (LU) condition is related to the power around the null hypothesis and is a weaker requirement than unbiasedness. The strongly unbiased (SU) condition is more restrictive than LU, but the associated WAP tests are easier to implement. Several tests are SU in finite samples or asymptotically, including tests robust to weak IV (such as the Anderson-Rubin, score, conditional quasi-likelihood ratio, and I. Andrews' (2015) PI-CLC tests) and two-sided tests which are optimal when the sample size is large and instruments are strong. We refer to the WAP-SU tests based on our weights as MM1-SU and MM2-SU tests. Dropping the restrictive assumptions of normality and known variance, the theory is shown to remain valid at the cost of asymptotic approximations. The MM2-SU test is optimal under the strong IV asymptotics, and outperforms other existing tests under the weak IV asymptotics.
Abstract:
In this article, I develop and analyze a two-period model in which two politicians compete for the preference of a representative voter, who knows how benevolent one of the politicians is but is imperfectly informed about how benevolent the second one is. The known politician is interpreted as a long-standing incumbent, while the unknown politician is interpreted as a lesser-known challenger. I establish that the incentive-provision mechanism inherent in elections - which operates through the possibility of not reelecting an incumbent - and the voter's information-acquisition considerations combine so that, in any equilibrium of this game, the voter chooses the unknown politician in the initial period of the model - an action I refer to as experimentation - thus providing a rationalization for the non-reelection of long-lived incumbents. Specifically, I show that the voter's decision about whom to elect in the initial period reduces to a comparison between the informational benefits of choosing the unknown politician and the economic losses of doing so. The former, which capture the information-acquisition considerations, are shown to be always positive, while the latter, which capture the incentive for good performance, are always non-negative, implying that it is always optimal for the voter to choose the unknown politician in the initial period.
Abstract:
This paper proposes a simple OLG model which is consistent with the essential facts about consumer behavior, capital accumulation and wealth distribution, and yields some new and surprising conclusions about fiscal policy. By considering a society in which individuals are distinguished according to two characteristics, altruism and wealth preference, we show that those who in the long run hold the bulk of private capital are not so much motivated by dynastic altruism as by preference for wealth. Two types of social segmentation can result, with different wealth distributions. To a large extent our results seem to fit reality better than those obtained with standard optimal growth models in which dynastic altruism (or rate of impatience) is the only source of heterogeneity: overaccumulation can appear, public debt and unfunded pensions are not neutral, and estate taxation can improve the welfare of the top wealthy.
Abstract:
The determination of a specific orbit and the procedure to calculate orbital maneuvers of artificial satellites are problems of extreme importance in the study of orbital mechanics, and the problem of transferring a spacecraft from one orbit to another has received increasing attention in recent years. Many applications can be found in several space activities, for example: to put a satellite in a geostationary orbit, to change the position of a spacecraft, to maintain a specific satellite's orbit, or in the design of an interplanetary mission. The Brazilian satellite SCD-1 (Data Collecting Satellite) is used as the example in this paper. It is the first satellite developed entirely in Brazil, and it remains in operation to this date. SCD-1 was designed, developed, built, and tested by Brazilian scientists, engineers, and technicians working at INPE (National Institute for Space Research) and in Brazilian industry. During its lifetime, it may be necessary to perform complementary maneuvers, either an orbital transfer or periodic corrections. The purpose of the transfer problem is to change the satellite's position, velocity, and mass to a new predetermined state. The transfer can be fully constrained (as in a rendezvous) or partially free (free time, free final velocity, etc.). In the general case, the direction, orientation, and magnitude of the thrust to be applied must be chosen, respecting the equipment's limits. Either sub-optimal or optimal maneuvers may be used for the transfer; in the present study, only the sub-optimal is shown. This method simplifies the thrust application direction, allowing a calculation fast enough to be used in real time. The thrust is assumed small and constant, and the purpose of this paper is to find the time interval over which it is applied. The paper is divided into three parts: the first explains and details the sub-optimal maneuver, the second presents the satellite SCD-1, and the last shows the results of applying the sub-optimal maneuver to the Brazilian satellite.
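As a rough illustration of the kind of calculation involved - not the paper's method - the sketch below estimates the burn interval for a small, constant tangential thrust on a near-circular orbit, using the energy-rate approximation da/dt = 2 f_t a^(3/2) / sqrt(mu). All parameter values are hypothetical, not SCD-1 data:

    # Illustrative burn-interval estimate for small constant tangential thrust.
    import math

    MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

    def burn_time(a0, a_target, f_t, dt=1.0):
        """Integrate da/dt = 2*f_t*a**1.5/sqrt(MU) forward in time until the
        semi-major axis reaches a_target; return the burn duration in seconds."""
        a, t = a0, 0.0
        while a < a_target:
            a += 2.0 * f_t * a**1.5 / math.sqrt(MU) * dt
            t += dt
        return t

    # Hypothetical case: raise a 7,000 km orbit by 10 km with 1 mN/kg of thrust.
    print(burn_time(7.0e6, 7.01e6, 1e-3))  # roughly 5,400 s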
Abstract:
The problem of signal tracking, in the presence of a disturbance signal in the plant, is solved using a zero-variation methodology. A state feedback controller is designed in order to minimise the H-2-norm of the closed-loop system, such that the effect of the disturbance is attenuated. Then, a state estimator is designed and the modification of the zeros is used to minimise the H-infinity-norm from the reference input signal to the error signal. The error is taken to be the difference between the reference and the output signals, thereby making it a tracking problem. The design is formulated in a linear matrix inequality framework, such that the optimal solution of the stated control problem is obtained. Practical examples illustrate the effectiveness of the proposed method.
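The H2 state-feedback step described here is commonly posed as a semidefinite program. Below is a minimal sketch of that step only, with assumed plant matrices and one standard LMI formulation solved via cvxpy; the paper's zero-variation and H-infinity estimator stages are not reproduced:

    # Sketch: H2 state-feedback synthesis via LMIs (assumed data, standard form).
    # System: dx/dt = A x + B2 u + B1 w,  z = C x + D u,  with u = K x.
    import cvxpy as cp
    import numpy as np

    A  = np.array([[0.0, 1.0], [-2.0, -0.5]])   # hypothetical plant
    B2 = np.array([[0.0], [1.0]])               # control input
    B1 = np.array([[0.0], [1.0]])               # disturbance input
    C  = np.array([[1.0, 0.0]])                 # performance output
    D  = np.array([[0.1]])

    n, m, p = 2, 1, 1
    Y = cp.Variable((n, n), symmetric=True)     # Lyapunov-like variable
    Z = cp.Variable((m, n))                     # Z = K Y
    W = cp.Variable((p, p), symmetric=True)     # bounds the H2 cost

    lyap = A @ Y + Y @ A.T + B2 @ Z + Z.T @ B2.T + B1 @ B1.T
    schur = cp.bmat([[W, C @ Y + D @ Z],
                     [(C @ Y + D @ Z).T, Y]])

    prob = cp.Problem(cp.Minimize(cp.trace(W)),
                      [lyap << -1e-6 * np.eye(n),
                       schur >> 1e-6 * np.eye(n + p),
                       Y >> 1e-6 * np.eye(n)])
    prob.solve()
    K = Z.value @ np.linalg.inv(Y.value)        # recovered feedback gain
    print("H2 norm bound:", np.sqrt(np.trace(W.value)), "K =", K)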
Abstract:
This work presents the application of a multiobjective evolutionary algorithm (MOEA) to the optimal power flow (OPF) problem. The OPF is modeled as a constrained nonlinear optimization problem, non-convex and large-scale, with continuous and discrete variables. Violated inequality constraints are treated as objective functions of the problem. This strategy allows the physical and operational constraints to be met without compromising the quality of the solutions found. The developed MOEA is based on Pareto optimality theory and employs a diversity-preserving mechanism to avoid premature convergence of the algorithm to locally optimal solutions. Fuzzy set theory is employed to extract the best compromise solutions from the Pareto set. Results for the IEEE-30, RTS-96 and IEEE-354 test systems are presented to validate the efficiency of the proposed model and solution technique.
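A small sketch of the Pareto-dominance test at the core of such a MOEA, with constraint violations treated as an extra objective to minimize, as the abstract describes; the data and names are illustrative assumptions:

    # Pareto dominance and front extraction (all objectives minimized).
    def dominates(u, v):
        """True if objective vector u Pareto-dominates v."""
        return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

    def nondominated(population):
        """Return the current Pareto front of a list of objective vectors."""
        return [u for u in population if not any(dominates(v, u) for v in population)]

    # Each vector: (generation cost, total inequality-constraint violation).
    pop = [(410.2, 0.00), (398.7, 0.12), (405.5, 0.00), (398.7, 0.00)]
    print(nondominated(pop))   # -> [(398.7, 0.0)]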
Abstract:
A body of research has developed within the context of nonlinear signal and image processing that deals with the automatic, statistical design of digital window-based filters. Based on pairs of ideal and observed signals, a filter is designed in an effort to minimize the error between the ideal and filtered signals. The goodness of an optimal filter depends on the relation between the ideal and observed signals, but the goodness of a designed filter also depends on the amount of sample data from which it is designed. In order to lessen the design cost, a filter is often chosen from a given class of filters, thereby constraining the optimization and increasing the error of the optimal filter. To a great extent, the problem of filter design concerns striking the correct balance between the degree of constraint and the design cost. From a different perspective and in a different context, the problem of constraint versus sample size has been a major focus of study within the theory of pattern recognition. This paper discusses the design problem for nonlinear signal processing, shows how the issue naturally transitions into pattern recognition, and then provides a review of salient related pattern-recognition theory. In particular, it discusses classification rules, constrained classification, the Vapnik-Chervonenkis theory, and implications of that theory for morphological classifiers and neural networks. The paper closes by discussing some design approaches developed for nonlinear signal processing, and how the nature of these naturally leads to a decomposition of the error of a designed filter into a sum of the following components: the Bayes error of the unconstrained optimal filter, the cost of constraint, the cost of reducing complexity by compressing the original signal distribution, the design cost, and the contribution of prior knowledge to a decrease in the error. The main purpose of the paper is to present fundamental principles of pattern recognition theory within the framework of active research in nonlinear signal processing.
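That closing decomposition can be written compactly as follows; the symbols are illustrative shorthand for the components named in the abstract, not the paper's notation:

    \varepsilon[\psi_n] \;=\; \varepsilon_{\text{Bayes}}
      + \Delta_{\text{constraint}} + \Delta_{\text{compression}}
      + \Delta_{\text{design}} - \Delta_{\text{prior}}

where \varepsilon_{\text{Bayes}} is the error of the unconstrained optimal filter, \Delta_{\text{constraint}} the cost of constraint, \Delta_{\text{compression}} the cost of compressing the original signal distribution, \Delta_{\text{design}} the finite-sample design cost, and \Delta_{\text{prior}} the error decrease contributed by prior knowledge.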