8 results for Economic instability

in CaltechTHESIS


Relevance:

20.00%

Publisher:

Abstract:

The main theme running through these three chapters is that economic agents are often forced to respond to events that are not a direct result of their own actions or of other agents' actions. The optimal response to these shocks necessarily depends on agents' understanding of how the shocks arise. The economic environment in the first two chapters is analogous to the classic chain store game. In this setting, the addition of unintended trembles by the agents creates an environment better suited to reputation building. The third chapter considers competitive equilibrium price dynamics in an overlapping generations environment when there are supply and demand shocks.

The first chapter is a game theoretic investigation of a reputation building game. A sequential equilibrium model, called the "error prone agents" model, is developed. In this model, agents believe that all actions are potentially subject to an error process. Incorporating this belief into the equilibrium calculation provides a richer class of reputation-building possibilities than when perfect implementation is assumed.
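For concreteness, one way such an error process can be written (the tremble probability ε and the number of available actions K are illustrative notation, not the thesis's own):

    Pr(realized action = a  | intended action a) = 1 − ε
    Pr(realized action = a' | intended action a) = ε / (K − 1)   for each a' ≠ a

Because every observable history then receives positive probability, beliefs after surprising moves remain well defined in the sequential equilibrium calculation.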

In the second chapter, maximum likelihood estimation is employed to test the consistency of this new model, and of other models, with data from experiments run by other researchers that served as the basis for prominent papers in this field. The alternative models considered are essentially modifications of the standard sequential equilibrium. While some models perform well, in that the nature of the modification explains deviations from the sequential equilibrium, the degree to which these modifications must be applied shows no consistency across different experimental designs.
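As a rough sketch of how such a maximum likelihood comparison can be set up (the single error-rate parameter, the data layout, and all names below are assumptions made for illustration, not the estimation actually performed in the chapter):

    # Sketch: fit a one-parameter error-rate model to binary "deviation" data
    # by maximum likelihood. Data and model form are invented for illustration.
    import numpy as np
    from scipy.optimize import minimize_scalar

    # deviations[i] = 1 if a subject's action differed from the model's
    # predicted equilibrium action in observation i, 0 otherwise.
    deviations = np.array([0, 0, 1, 0, 1, 0, 0, 0, 1, 0])

    def neg_log_likelihood(eps):
        eps = np.clip(eps, 1e-9, 1 - 1e-9)  # keep the logs well defined
        return -np.sum(deviations * np.log(eps) + (1 - deviations) * np.log(1 - eps))

    fit = minimize_scalar(neg_log_likelihood, bounds=(0.0, 1.0), method="bounded")
    print(f"estimated error rate {fit.x:.3f}, log-likelihood {-fit.fun:.3f}")

Competing specifications can then be ranked by their maximized log-likelihoods across the different experimental designs, which is the spirit of the consistency test described above.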

The third chapter is a study of price dynamics in an overlapping generations model. It establishes the existence of a unique perfect-foresight competitive equilibrium price path in a pure exchange economy with a finite time horizon when there are arbitrarily many shocks to supply or demand. One main reason for the interest in this equilibrium is that overlapping generations environments are very fruitful for the study of price dynamics, especially in experimental settings. The perfect foresight assumption is an important place to start when examining these environments because it will produce the ex post socially efficient allocation of goods. This characteristic makes this a natural baseline to which other models of price dynamics could be compared.

Relevance:

20.00%

Publisher:

Abstract:

In three essays we examine user-generated product ratings with aggregation. While recommendation systems have been studied extensively, this simple type of recommendation system has been neglected, despite its prevalence in the field. We develop a novel theoretical model of user-generated ratings. This model improves upon previous work in three ways: it considers rational agents and allows them to abstain from rating when rating is costly; it incorporates rating aggregation (such as averaging ratings); and it considers the effect of multiple simultaneous raters on rating strategies. In the first essay we provide a partial characterization of equilibrium behavior. In the second essay we test this theoretical model in the laboratory, and in the third we apply established behavioral models to the data generated in the lab.

This study provides clues to the prevalence of extreme-valued ratings in field implementations. We show theoretically that, in equilibrium, ratings distributions do not represent the value distributions of sincere ratings. Indeed, we show that if rating strategies follow a set of regularity conditions, then in equilibrium the rate at which players participate is increasing in the extremity of agents' valuations of the product. This theoretical prediction is realized in the lab. We also find that human subjects show a disproportionate predilection for sincere rating, and that when they do send insincere ratings, they are almost always in the direction of exaggeration. Both sincere and exaggerated ratings occur with great frequency despite the fact that such rating strategies are not in subjects' best interest. We therefore apply the behavioral concepts of quantal response equilibrium (QRE) and cursed equilibrium (CE) to the experimental data. Together, these theories explain the data significantly better than does a theory of rational, Bayesian behavior, accurately predicting key comparative statics. However, the theories fail to predict the high rates of sincerity, and it is clear that a better theory is needed.
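A minimal simulation of the costly-rating cutoff logic behind that prediction (the uniform valuations, the rating cost, and the cutoff rule below are assumptions chosen for illustration, not the equilibrium characterized in the essays):

    # Sketch: with a fixed cost of rating, only agents whose valuation is far
    # from what the aggregate already conveys bother to rate. Numbers invented.
    import numpy as np

    rng = np.random.default_rng(0)
    valuations = rng.uniform(0.0, 1.0, size=10_000)  # private valuations of the product
    prior_mean = 0.5                                  # what the current aggregate suggests
    cost = 0.15                                       # cost of submitting a rating

    rates = np.abs(valuations - prior_mean) > cost    # participate only if "extreme" enough

    print(f"share who rate:                   {rates.mean():.2f}")
    print(f"mean extremity among raters:      {np.abs(valuations[rates] - prior_mean).mean():.2f}")
    print(f"mean extremity in the population: {np.abs(valuations - prior_mean).mean():.2f}")

The submitted ratings over-represent extreme valuations relative to the underlying value distribution, which is the flavor of the equilibrium result stated above.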

Relevance:

20.00%

Publisher:

Abstract:

With novel application of optical techniques, the slender-body hypervelocity boundary-layer instability is characterized in the previously unexplored regime where thermo-chemical effects are important. Narrowband disturbances (500-3000 kHz) are measured in boundary layers with edge velocities of up to 5 km/s at two points along the generator of a 5-degree half-angle cone. Experimental amplification factor spectra are presented. Linear stability and parabolized stability equation (PSE) analyses are performed, with fair prediction of the frequency content of the disturbances; however, the analyses over-predict the amplification of the disturbances. The results of this work have two key implications: 1) the acoustic instability is present and may be studied in a large-scale hypervelocity reflected-shock tunnel, and 2) the data set provides a new basis on which the instability can be studied.
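For reference, the amplification factor in such spectra is conventionally expressed as an N factor from linear stability theory (a standard definition, not a result particular to this work):

    N(f, x) = ln( A(f, x) / A0(f) ) = ∫ −α_i(f, x') dx'   (integrated from the neutral point to x)

where A is the disturbance amplitude at frequency f, A0 its amplitude where the disturbance first becomes unstable, and α_i the local spatial growth rate returned by the linear stability or PSE calculation.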

Relevance:

20.00%

Publisher:

Abstract:

An economic air pollution control model, which determines the least cost of reaching various air quality levels, is formulated. The model takes the form of a general, nonlinear, mathematical programming problem. Primary contaminant emission levels are the independent variables. The objective function is the cost of attaining various emission levels and is to be minimized subject to constraints that given air quality levels be attained.
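Schematically, the problem just described takes the following form (the symbols are generic, not the thesis's notation):

    minimize    C(E)              total annual control cost of attaining emission levels E
    subject to  Q_j(E) ≥ Q_j*     each required air quality level j is attained
                0 ≤ E ≤ E_base    emissions cannot exceed the uncontrolled (base) levels

where E is the vector of primary contaminant emission levels, C(E) the least cost of reaching E, and Q_j(E) the air quality that results.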

The model is applied to a simplified statement of the photochemical smog problem in Los Angeles County in 1975, with emissions specified by a two-dimensional vector: total reactive hydrocarbon (RHC) and nitrogen oxide (NOx) emissions. Air quality, also two-dimensional, is measured by the expected number of days per year that nitrogen dioxide (NO2) and mid-day ozone (O3) exceed standards in Central Los Angeles.

The minimum cost of reaching various emission levels is found by a linear programming model. The base or "uncontrolled" emission levels are those that will exist in 1975 with the present new car control program and with the degree of stationary source control existing in 1971. Controls, basically "add-on devices," are considered here for used cars, aircraft, and existing stationary sources. It is found that with these added controls, Los Angeles County emission levels [(1300 tons/day RHC, 1000 tons/day NOx) in 1969 and (670 tons/day RHC, 790 tons/day NOx) at the base 1975 level] can be reduced to 260 tons/day RHC (minimum RHC program) and 460 tons/day NOx (minimum NOx program).
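A toy version of that linear programming step, purely to show its structure (the two control measures, their costs, and their emission reductions are invented numbers, not figures from the study):

    # Toy LP: choose control utilization levels x in [0, 1] to reach an RHC
    # emission target at least cost. All numbers are invented for illustration.
    from scipy.optimize import linprog

    cost = [30.0, 45.0]        # $M per year for full use of control 1 and control 2
    rhc_cut = [200.0, 150.0]   # tons/day of RHC removed at full use
    base_rhc = 670.0           # base 1975 RHC emissions, tons/day
    target_rhc = 400.0         # desired RHC emission level, tons/day

    # minimize cost·x  subject to  rhc_cut·x >= base_rhc - target_rhc,  0 <= x <= 1
    res = linprog(c=cost,
                  A_ub=[[-rhc_cut[0], -rhc_cut[1]]],
                  b_ub=[-(base_rhc - target_rhc)],
                  bounds=[(0, 1), (0, 1)],
                  method="highs")
    print(res.x, res.fun)      # chosen control levels and the minimum cost

Repeating such a solve over a grid of emission targets traces out the control cost-emission level relationship used later in the analysis.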

"Phenomenological" or statistical air quality models provide the relationship between air quality and emissions. These models estimate the relationship by using atmospheric monitoring data taken at one (yearly) emission level and by using certain simple physical assumptions, (e. g., that emissions are reduced proportionately at all points in space and time). For NO2, (concentrations assumed proportional to NOx emissions), it is found that standard violations in Central Los Angeles, (55 in 1969), can be reduced to 25, 5, and 0 days per year by controlling emissions to 800, 550, and 300 tons /day, respectively. A probabilistic model reveals that RHC control is much more effective than NOx control in reducing Central Los Angeles ozone. The 150 days per year ozone violations in 1969 can be reduced to 75, 30, 10, and 0 days per year by abating RHC emissions to 700, 450, 300, and 150 tons/day, respectively, (at the 1969 NOx emission level).

The control cost-emission level and air quality-emission level relationships are combined in a graphical solution of the complete model to find the cost of various air quality levels. Best possible air quality levels with the controls considered here are 8 O3 and 10 NO2 violations per year (minimum ozone program) or 25 O3 and 3 NO2 violations per year (minimum NO2 program) with an annualized cost of $230,000,000 (above the estimated $150,000,000 per year for the new car control program for Los Angeles County motor vehicles in 1975).

Relevance:

20.00%

Publisher:

Abstract:

The stability of a fluid having a non-uniform temperature stratification is examined analytically for its response to infinitesimal disturbances. The growth rates of disturbances have been established for a semi-infinite fluid for Rayleigh numbers of 10^3, 10^4, and 10^5 and for Prandtl numbers of 7.0 and 0.7.

The critical Rayleigh number for a semi-infinite fluid, based on the effective fluid depth, is found to be 32, while it is shown that for a finite fluid layer the critical Rayleigh number depends on the rate of heating. The minimum critical Rayleigh number, based on the depth of a fluid layer, is found to be 1340.
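For reference, the Rayleigh number in these results is the standard dimensionless group (a textbook definition, not specific to this thesis):

    Ra = g β ΔT d³ / (ν κ)

with g the gravitational acceleration, β the thermal expansion coefficient, ΔT the temperature difference across the depth d (the effective fluid depth or the layer depth, as indicated above), ν the kinematic viscosity, and κ the thermal diffusivity.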

The stability of a finite fluid layer is examined for two special forms of heating. The first is constant flux heating, while in the second, the temperature of the lower surface is increased uniformly in time. In both cases, it is shown that for moderate rates of heating the critical Rayleigh number is reduced, over the value for very slow heating, while for very rapid heating the critical Rayleigh number is greatly increased. These results agree with published experimental observations.

The question of steady, non-cellular convection is given qualitative consideration. It is concluded that, although the motion may originate from infinitesimal disturbances during non-uniform heating, the final flow field is intrinsically non-linear.

Relevance:

20.00%

Publisher:

Abstract:

Stars with a core mass greater than about 30 M☉ become dynamically unstable due to electron-positron pair production when their central temperature reaches 1.5-2.0 × 10^9 °K. The collapse and subsequent explosion of stars with core masses of 45, 52, and 60 M☉ are calculated. The range of the final velocity of expansion (3,400 – 8,500 km/sec) and of the mass ejected (1 – 40 M☉) is comparable to that observed for type II supernovae.

An implicit scheme of hydrodynamic difference equations (stable for large time steps) used for the calculation of the evolution is described.
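As a one-line illustration of why an implicit scheme tolerates large time steps (a generic model problem, not the hydrodynamic equations actually solved here): for an evolution equation u_t = L(u), the explicit update u^(n+1) = u^n + Δt L(u^n) is stable only for Δt below a limit set by the fastest time scale in L, whereas the implicit update

    u^(n+1) = u^n + Δt L(u^(n+1))

requires solving a (generally nonlinear) system at every step but remains stable for much larger Δt.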

For fast evolution the turbulence caused by convective instability does not produce the zero entropy gradient and perfect mixing found for slower evolution. A dynamical model of the convection is derived from the equations of motion and then incorporated into the difference equations.

Relevance:

20.00%

Publisher:

Abstract:

Experimental and theoretical studies have been made of the electrothermal waves occurring in a nonequilibrium MHD plasma. These waves are caused by an instability that occurs when a plasma whose conductivity depends on current density is subjected to crossed electric and magnetic fields. Theoretically, the waves were studied by developing and solving the equations of a steady, one-dimensional nonuniformity in electron density. From these nonlinear equations, predictions of the maximum amplitude and of the half width of steady waves could be obtained. Experimentally, the waves were studied in a nonequilibrium discharge produced in a potassium-seeded argon plasma at 2000°K and 1 atm pressure. The behavior of such a discharge with four different configurations of electrodes was determined from photographs, photomultiplier measurements, and voltage probes. These four configurations were chosen to produce steady waves, to check the stability of steady waves, and to observe the manifestation of the waves in an MHD generator or accelerator configuration.

Steady, one-dimensional waves were found to exist in a number of situations, and where they existed, their characteristics agreed with the predictions of the steady theory. Some extensions of this theory were necessary, however, to describe the transient phenomena occurring in the inlet region of a discharge transverse to the gas flow. It was also found that in a discharge away from the stabilizing effect of the electrodes, steady waves became unstable for large Hall parameters. Methods for predicting the effective electrical conductivity and Hall parameter of a plasma with nonuniformities caused by the electrothermal waves were also studied. Using these methods and the amplitudes predicted by the steady theory, it was found that the measured decrease in the transverse conductivity of an MHD device, 50 per cent at a Hall parameter of 5, could be accounted for in terms of the electrothermal instability.
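For orientation, in a uniform plasma with scalar conductivity σ and Hall parameter β, the effective conductivities across the magnetic field are the textbook tensor components (a standard relation, not the nonuniform-plasma result developed in this work):

    σ_transverse = σ / (1 + β²),    σ_Hall = σ β / (1 + β²)

The methods described above predict how electrothermal nonuniformities reduce the effective transverse conductivity below this uniform-plasma value, which is presumably the comparison behind the 50 per cent decrease quoted at a Hall parameter of 5.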