931 results for Cournot equilibrium, non-cooperative oligopoly, quasi-competitiveness, stability
Abstract:
The goal of this paper is to show the possibility of a non-monotone relation between coverage and risk, which has been considered in the literature on insurance models since the work of Rothschild and Stiglitz (1976). We present an insurance model where the insured agents are heterogeneous in risk aversion and in lenience (a prevention cost parameter). Risk aversion is described by a continuous parameter which is correlated with lenience and, for the sake of simplicity, we assume perfect correlation. In the case of positive correlation, the more risk averse agent has a higher cost of prevention, leading to a higher demand for coverage. Equivalently, the single crossing property (SCP) is valid and implies a positive correlation between coverage and risk in equilibrium. On the other hand, if the correlation between risk aversion and lenience is negative, not only may the SCP be broken, but so may the monotonicity of contracts, i.e., the prediction that high (low) risk averse types choose full (partial) insurance. In both cases riskiness is monotonic in risk aversion, but in the latter case there are some coverage levels associated with two different risks (low and high), which implies that the ex-ante (with respect to the risk aversion distribution) correlation between coverage and riskiness may have any sign (even though the ex-post correlation is always positive). Moreover, using another instrument (a proxy for riskiness), we give a testable implication to disentangle single crossing and non single crossing under an ex-post zero correlation result: the monotonicity of coverage as a function of riskiness. Since, controlling for risk aversion (no asymmetric information), coverage is a monotone function of riskiness, this also gives a test for asymmetric information. Finally, we relate these theoretical results to empirical tests in the recent literature, especially the work of Dionne, Gouriéroux and Vanasse (2001). In particular, they found empirical evidence that seems to be compatible with asymmetric information and non single crossing in our framework. More generally, we build a hidden information model showing how omitted variables (asymmetric information) can bias the sign of the correlation of equilibrium variables conditional on all observable variables. We show that this may be the case when the omitted variables have a non-monotonic relation with the observable ones. Moreover, because this non-monotonic relation is deeply related to the failure of the SCP in one-dimensional screening problems, the existing literature on asymmetric information does not capture this feature. Hence, our main result is to point out the importance of the SCP in testing predictions of hidden information models.
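For readers who want the condition in symbols, a standard statement of the Spence–Mirrlees single crossing property in a one-dimensional screening setting is sketched below; the notation (utility U, coverage x, premium p, type θ) is illustrative and not taken from the paper.

```latex
% Spence–Mirrlees single crossing property (SCP), stated only as a reference
% for the discussion above; the notation is illustrative, not the paper's.
% U(x, p, \theta): utility of a type-\theta agent buying coverage x at premium p.
\[
  \frac{\partial}{\partial \theta}
  \left( - \frac{U_x(x, p, \theta)}{U_p(x, p, \theta)} \right) > 0
  \qquad \text{for all } (x, p, \theta),
\]
% i.e. the marginal willingness to pay for coverage is strictly increasing in
% the type, so the indifference curves of two distinct types cross at most once
% in the (x, p) plane and higher types demand weakly more coverage.
```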
Abstract:
Using an example, we study the analogs, for the differentiated product case, of the Cournot and Bertrand equilibria. These equilibria can be shown to exist and be unique if we impose a simple and natural restriction on the elasticities of the demand functions for the differentiated products. Our characterizations of these equilibria make it possible to compare them and to determine how they are affected by the size of the market and the number of firms. We are also able to prove the existence of Cournot free-entry equilibria in which the number of firms is determined endogenously. In addition, we are able to prove that, in a large market, the Cournot free-entry equilibria approximate the Dixit-Stiglitz monopolistically competitive equilibria. The free-entry equilibrium concept we study is an analog of the one studied by Novshek for the case of firms selling products that are perfect substitutes. Our results are extensions of Novshek's. While we were unable to establish a general existence result for Bertrand free-entry equilibria, we were able to prove that, when these equilibria exist, they are unique and that in large markets they also approximate the Dixit-Stiglitz equilibria.
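As a purely illustrative companion to this abstract, the sketch below computes a two-firm Cournot equilibrium with differentiated products under linear inverse demand by best-response iteration; the parameters a, c and gamma are invented for the example and are not those of the paper. In this toy case a role loosely analogous to the paper's elasticity restriction is played by gamma < 1, which makes the best-response map a contraction and delivers a unique equilibrium.

```python
# Illustrative Cournot equilibrium for two firms selling differentiated products.
# Linear inverse demand: p_i = a - q_i - gamma * q_j  (0 < gamma < 1 measures
# substitutability), constant marginal cost c.  All parameter values are
# hypothetical and are not taken from the paper.

def best_response(q_other: float, a: float, c: float, gamma: float) -> float:
    """Profit-maximizing quantity given the rival's quantity."""
    return max(0.0, (a - c - gamma * q_other) / 2.0)

def cournot_equilibrium(a=10.0, c=2.0, gamma=0.5, tol=1e-12, max_iter=10_000):
    """Find the Cournot equilibrium by best-response iteration."""
    q1 = q2 = 0.0
    for _ in range(max_iter):
        q1_new = best_response(q2, a, c, gamma)
        q2_new = best_response(q1, a, c, gamma)
        if abs(q1_new - q1) < tol and abs(q2_new - q2) < tol:
            break
        q1, q2 = q1_new, q2_new
    return q1, q2

if __name__ == "__main__":
    q1, q2 = cournot_equilibrium()
    # Closed form for this symmetric example: q* = (a - c) / (2 + gamma) = 3.2
    print(f"q1 = {q1:.4f}, q2 = {q2:.4f}")
```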
Abstract:
The paper analyzes a two period general equilibrium model with individual risk and moral hazard. Each household faces two individual states of nature in the second period. These states solely differ in the household's vector of initial endowments, which is strictly larger in the first state (good state) than in the second state (bad state). In the first period households choose a non-observable action. Higher levels of action give higher probability of the good state of nature to occur, but lower levels of utility. Households have access to an insurance market that allows transfer of income across states of nature. I consider two models of financial markets, the price-taking behavior model and the nonlinear pricing model. In the price-taking behavior model suppliers of insurance have a belief about each household's action and take asset prices as given. A variation of standard arguments shows the existence of a rational expectations equilibrium. For a generic set of economies every equilibrium is constrained sub-optimal: there are commodity prices and a reallocation of financial assets satisfying the first period budget constraint such that, at each household's optimal choice given those prices and asset reallocation, markets clear and every household's welfare improves. In the nonlinear pricing model suppliers of insurance behave strategically, offering nonlinear pricing contracts to the households. I provide sufficient conditions for the existence of equilibrium and investigate the optimality properties of the model. If there is a single commodity then every equilibrium is constrained optimal. If there is more than one commodity, then for a generic set of economies every equilibrium is constrained sub-optimal.
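To make the household's trade-off concrete, here is a small numerical sketch (not the paper's model): the household chooses an unobservable effort level and an insurance transfer, where higher effort raises the probability of the good state but carries a utility cost, and the insurer prices coverage according to its belief about effort. All functional forms and numbers are hypothetical.

```python
# Toy version of the household problem described above: choose an effort level
# `a` and an insurance transfer `t` (premium paid in the good state, indemnity
# received in the bad state) to maximize expected utility minus an effort cost.
# Functional forms and numbers are illustrative only.
import math

E_GOOD, E_BAD = 10.0, 4.0        # endowments in the good and bad state

def prob_good(a: float) -> float:
    """Probability of the good state, increasing in effort (a in [0, 1])."""
    return 0.5 + 0.4 * a

def utility(c: float) -> float:
    return math.log(c)

def expected_utility(a: float, t: float, price: float) -> float:
    """Expected utility net of the effort cost 0.5 * a**2."""
    p = prob_good(a)
    return (p * utility(E_GOOD - price * t)
            + (1.0 - p) * utility(E_BAD + t)
            - 0.5 * a ** 2)

def best_choice(price: float, grid: int = 200):
    """Grid search over effort and transfer; crude but enough for a sketch."""
    best = (-float("inf"), 0.0, 0.0)
    for i in range(grid + 1):
        a = i / grid
        for j in range(grid + 1):
            t = 4.0 * j / grid    # transfer between 0 and 4
            u = expected_utility(a, t, price)
            if u > best[0]:
                best = (u, a, t)
    return best

if __name__ == "__main__":
    # Premium rate that is actuarially fair if the insurer believes a = 0.5:
    believed_p_bad = 1.0 - prob_good(0.5)
    price = believed_p_bad / (1.0 - believed_p_bad)
    u, a, t = best_choice(price)
    print(f"chosen effort = {a:.2f}, transfer = {t:.2f}, EU = {u:.3f}")
```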
Abstract:
We evaluate the forecasting performance of a number of systems models of US short- and long-term interest rates. Non-linearities, including asymmetries in the adjustment to equilibrium, are shown to result in more accurate short-horizon forecasts. We find that both long and short rates respond to disequilibria in the spread in certain circumstances, which would not be evident from linear representations or from single-equation analyses of the short-term interest rate.
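To illustrate the kind of asymmetric adjustment to equilibrium referred to above, the sketch below simulates, and produces one-step forecasts from, a threshold error-correction model in which short and long rates adjust at different speeds depending on the sign of the spread disequilibrium. The specification and coefficients are invented for illustration and are not the systems models estimated in the paper.

```python
# Illustrative threshold (asymmetric) error-correction model for a short rate r
# and a long rate R.  The "equilibrium" is the spread s = R - r; adjustment
# speeds differ with the sign of the disequilibrium.  All coefficients and the
# simulated data are hypothetical -- this is not the paper's estimated model.
import random

random.seed(0)

MEAN_SPREAD = 1.0                     # long-run equilibrium spread
ALPHA_POS, ALPHA_NEG = 0.10, 0.40     # short-rate adjustment above/below the mean spread
BETA_POS, BETA_NEG = -0.05, -0.20     # long-rate adjustment above/below the mean spread

def step(r: float, R: float) -> tuple[float, float]:
    """One period of the asymmetric ECM with small Gaussian shocks."""
    z = (R - r) - MEAN_SPREAD                      # disequilibrium in the spread
    alpha = ALPHA_POS if z > 0 else ALPHA_NEG
    beta = BETA_POS if z > 0 else BETA_NEG
    return r + alpha * z + random.gauss(0.0, 0.05), R + beta * z + random.gauss(0.0, 0.05)

def forecast(r: float, R: float) -> tuple[float, float]:
    """One-step-ahead point forecast: the ECM drift, with shocks set to zero."""
    z = (R - r) - MEAN_SPREAD
    alpha = ALPHA_POS if z > 0 else ALPHA_NEG
    beta = BETA_POS if z > 0 else BETA_NEG
    return r + alpha * z, R + beta * z

if __name__ == "__main__":
    r, R = 4.0, 6.0                                # wide spread: z = +1
    print("one-step forecast:", forecast(r, R))
    for _ in range(5):
        r, R = step(r, R)
    print("simulated path ends at:", round(r, 3), round(R, 3))
```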
Abstract:
This thesis consists of three essays. The first essay analyzes the publicly available information on the credit portfolio risk of Brazilian banks and is divided into two chapters. The first chapter examines the limitations of the public information disclosed by banks and by the Central Bank when compared with the managerial information available internally to the banks. We conclude that there is room to increase transparency in the disclosure of information, something that has been happening gradually in Brazil through new rules related to Pillar 3 of Basel II and to the release of more detailed information by the Central Bank (Bacen), such as the "Top50" data. The second part of the first essay shows the discrepancy between the accounting non-performing loan ratio (NPL) and the probability of default (PD), and also discusses the relation between provisions and expected loss. Using migration matrices and a simulation based on overlapping vintages of the credit portfolios of large banks, we conclude that the NPL ratio underestimates the PD and that the provisions set aside by banks are smaller than the expected loss of the Brazilian financial system (SFN). The second essay relates risk management to price discrimination. We develop a model consisting of a Cournot duopoly in a retail credit market in which banks can practice third-degree price discrimination. In this model, potential borrowers are of two types, low risk or high risk, with low-risk borrowers having the more elastic demand. According to the model, if the cost of observing a customer's type is high, the banks' strategy is not to discriminate (a pooling equilibrium); but if this cost is sufficiently low, it is optimal for the banks to charge different rates to each group. We argue that the Basel II Accord acted as an exogenous shock that shifted the equilibrium to a situation with more discrimination. The third essay is divided into two chapters. The first discusses the application of the concepts of subjective probability and Knightian uncertainty to VaR models and the importance of assessing "model risk", which comprises estimation, specification and identification risks. The essay proposes that the "four elements" methodology of operational risk (internal data, external data, business environment and scenarios) be extended to the measurement of other risks (market risk and credit risk). The second part of this last essay deals with the application of the scenario-analysis element to the measurement of conditional volatility on dates of relevant economic announcements, specifically on the days of Copom meetings.
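The second essay's pooling-versus-discrimination logic can be illustrated with a deliberately stripped-down calculation: under discrimination each bank plays Cournot separately in the low-risk and high-risk segments and pays a screening cost F; under pooling a single rate is set on the horizontally summed demand with a population-average default cost. All numbers below are invented, and the pooling benchmark is a simplification, not the thesis's model.

```python
# Stylized version of the second essay's decision: a symmetric Cournot duopoly
# in retail credit either pools borrowers or pays a screening cost F per bank
# to observe borrower type and charge segment-specific rates (third-degree
# price discrimination).  All demand and cost numbers are invented.

def cournot_profit_per_bank(a: float, b: float, c: float) -> float:
    """Per-bank profit in a symmetric duopoly with inverse demand r = a - b*Q
    and constant unit cost c: each bank lends q = (a - c) / (3b)."""
    return (a - c) ** 2 / (9.0 * b)

# Low-risk segment: more elastic demand, lower expected default cost.
A_LOW, B_LOW, C_LOW = 0.18, 0.0004, 0.05
# High-risk segment: less elastic demand, higher expected default cost.
A_HIGH, B_HIGH, C_HIGH = 0.30, 0.0010, 0.12

# Pooling benchmark: one rate on the horizontally summed demand curve, with the
# unit cost approximated by the average default cost (a deliberate
# simplification so the comparison stays in closed form).
B_POOL = 1.0 / (1.0 / B_LOW + 1.0 / B_HIGH)
A_POOL = B_POOL * (A_LOW / B_LOW + A_HIGH / B_HIGH)
C_POOL = 0.5 * (C_LOW + C_HIGH)

profit_discriminating = (cournot_profit_per_bank(A_LOW, B_LOW, C_LOW)
                         + cournot_profit_per_bank(A_HIGH, B_HIGH, C_HIGH))
profit_pooling = cournot_profit_per_bank(A_POOL, B_POOL, C_POOL)

for F in (0.5, 2.0, 8.0):   # screening cost per bank
    choice = "discriminate" if profit_discriminating - F > profit_pooling else "pool"
    print(f"F = {F}: banks {choice} "
          f"(discrimination nets {profit_discriminating - F:.2f}, pooling {profit_pooling:.2f})")
```

With these illustrative numbers the banks discriminate when F is small and pool once F passes a threshold, which is the comparative-static role the essay attributes to Basel II lowering the cost of observing borrower type.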
Abstract:
This paper examines the industrial processing of cashew nuts as a factor in adding value, covering production, industrialization and marketing. The industry's competitiveness fundamentally depends on its ability to overcome technological and non-technological difficulties that increase costs and diminish the quality attributes valued by the market. The methodology applied in this paper was a questionnaire with a closed-question Likert scale, composed of variables organized into the following groups: exporting obstacles, market strategy, competitive advantages, broken-nut index, productive potential, and socio-economic profile of the members of the Cooperative. Descriptive analysis was the method employed for the data analysis. After identifying some of the quantitative gains in the industrial processing of cashew nuts, recommendations are presented to COOPERCAJU to promote courses to improve members' productivity, as well as technical assistance, in order to achieve greater efficacy in cashew nut processing.
Abstract:
Cellulose was extracted from lignocellulosic fibers and nanocrystalline cellulose (NC) prepared by alkali treatment of the fiber, steam explosion of the mercerized fiber, bleaching of the steam exploded fiber and finally acid treatment by 5% oxalic acid followed again by steam explosion. The average length and diameter of the NC were between 200-250 nm and 4-5 nm, respectively, in a monodisperse distribution. Different concentrations of the NC (0.1, 0.5, 1.0, 1.5, 2.0 and 2.5% by weight) were dispersed non-covalently into a completely bio-based thermoplastic polyurethane (TPU) derived entirely from oleic acid. The physical properties of the TPU nanocomposites were assessed by Fourier Transform Infra-Red spectroscopy (FTIR), Thermo-Gravimetric Analysis (TGA), Differential Scanning Calorimetry (DSC), X-Ray Diffraction (XRD), Dynamic Mechanical Analysis (DMA) and Mechanical Properties Analysis. The nanocomposites demonstrated enhanced stress and elongation at break and improved thermal stability compared to the neat TPU. The best results were obtained with 0.5% of NC in the TPU. The elongation at break of this sample was improved from 178% to 269% and its stress at break from 29.3 to 40.5 MPa. In this and all other samples the glass transition temperature, melting temperature and crystallization behavior were essentially unaffected. This finding suggests a potential method of increasing the strength and the elongation at break of typically brittle and weak lipid-based TPUs without alteration of the other physico-chemical properties of the polymer. (C) 2012 Elsevier Ltd. All rights reserved.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
A neural model for solving nonlinear optimization problems is presented in this paper. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points that represent an optimal feasible solution. The network is shown to be completely stable and globally convergent to the solutions of nonlinear optimization problems. A study of the modified Hopfield model is also developed to analyze its stability and convergence. Simulation results are presented to validate the developed methodology.
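As a rough illustration of the valid-subspace idea mentioned above (not the paper's exact network or parameter choices), the sketch below keeps the state of a gradient-like dynamics inside the affine subspace defined by linear equality constraints, so the iterates remain feasible while being driven toward a minimizer of a nonlinear objective.

```python
# Minimal sketch of the "valid subspace" idea used by modified Hopfield networks
# for constrained optimization: keep the state in the affine subspace {x : A x = b}
# and let projected gradient dynamics drive it toward a minimizer of the
# objective.  Objective, constraints and step size are illustrative only.
import numpy as np

def null_space_projector(A: np.ndarray) -> np.ndarray:
    """Projector onto the null space of A (the directions of the valid subspace)."""
    return np.eye(A.shape[1]) - np.linalg.pinv(A) @ A

def solve(A, b, grad_f, x0, lr=0.05, steps=5000):
    """Projected gradient dynamics: every iterate satisfies A x = b exactly."""
    P = null_space_projector(A)
    x = P @ x0 + np.linalg.pinv(A) @ b        # project the initial guess onto {A x = b}
    for _ in range(steps):
        x = x - lr * (P @ grad_f(x))          # update direction lies in the valid subspace
    return x

if __name__ == "__main__":
    # Example: minimize f(x) = sum(exp(x_i) - x_i) subject to x_1 + x_2 + x_3 = 1,
    # whose solution is x_i = 1/3 by symmetry.
    A = np.array([[1.0, 1.0, 1.0]])
    b = np.array([1.0])
    grad_f = lambda x: np.exp(x) - 1.0
    x_star = solve(A, b, grad_f, x0=np.array([2.0, -1.0, 0.5]))
    print("solution:", np.round(x_star, 4), "constraint value:", float(A @ x_star))
```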
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The power system stability analysis is approached taking explicitly into account the dynamic behavior of generators' internal voltages and control devices. The proposed method is not a direct method in the usual sense, since the conclusion for stability or instability is not based exclusively on energy function considerations, but it is automatic, since the conclusion is reached without analyst intervention. The stability test accounts for the non-conservative nature of the system with control devices such as the automatic voltage regulator (AVR) and automatic generation control (AGC), in contrast with the well-known direct methods. An energy function is derived for the system with fourth-order machine models, AVR and AGC, and it is used to start the analysis procedure and to point out criticalities. The conclusive analysis itself is made by means of a method based on the definition of a region surrounding the equilibrium point where the system's net torque is equilibrium-restorative. This region is named the positive synchronization region (PSR). Since the definition of the PSR boundaries does not depend on modelling approximations, the PSR test leads to reliable results. (C) 2008 Elsevier Ltd. All rights reserved.
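For orientation, the classical energy function for a single machine against an infinite bus, the simplest case of the machinery referenced above, is shown below; the paper's function, which covers fourth-order machine models with AVR and AGC, is considerably richer.

```latex
% Classical one-machine, infinite-bus energy function, shown only to fix ideas;
% the paper derives a richer function for fourth-order machine models with AVR
% and AGC.  \delta_s is the stable equilibrium angle, M the inertia constant.
\[
  V(\delta, \omega)
  = \tfrac{1}{2} M \omega^{2}
    - P_m \left( \delta - \delta_s \right)
    - P_{\max} \left( \cos\delta - \cos\delta_s \right),
\]
% kinetic plus potential energy of the rotor; along trajectories of the damped
% swing equation M\ddot{\delta} + D\dot{\delta} = P_m - P_{\max}\sin\delta
% (D \ge 0) one gets \dot{V} = -D\omega^2 \le 0, and level sets of V are used to
% estimate the region of attraction of the equilibrium.
```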