11 results for economic constraints
in CaltechTHESIS
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioral economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioral theories against evidence from lab and field experiments.
We first look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioral theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests; this imposes computational and economic constraints on classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn determine the next test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to establish theoretical guarantees relative to the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which yields orders-of-magnitude speedups over other methods.
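For orientation, here is a minimal, hypothetical sketch of the kind of greedy adaptive loop described above, using a simplified EC2-style objective for binary-outcome tests with a fixed error rate. The variable names, the noise model, and the scoring details are illustrative assumptions, not the thesis's implementation of BROAD.

```python
import numpy as np

# Hypothetical setup: each hypothesis (theory + parameters) deterministically
# predicts an outcome in {0, 1} for every candidate test; responses are
# observed with a fixed error rate EPS. All names and values are illustrative.
EPS = 0.1                      # assumed probability of an erroneous response
rng = np.random.default_rng(0)
n_hyp, n_tests = 8, 50
pred = rng.integers(0, 2, size=(n_hyp, n_tests))   # pred[h, t]: predicted choice
prior = np.full(n_hyp, 1.0 / n_hyp)                # uniform prior over hypotheses

def likelihood(y, t):
    """P(observed response y on test t | each hypothesis)."""
    return np.where(pred[:, t] == y, 1.0 - EPS, EPS)

def ec2_score(posterior, t):
    """Simplified EC2-style score: expected weight of hypothesis pairs that
    the test separates; a pair (h, h') is 'cut' by outcome y if at least
    one of the two predicts something other than y."""
    w = np.outer(posterior, posterior)          # edge weights between hypotheses
    np.fill_diagonal(w, 0.0)
    score = 0.0
    for y in (0, 1):
        p_y = float(posterior @ likelihood(y, t))        # predictive prob. of y
        consistent = (pred[:, t] == y)
        kept = w[np.ix_(consistent, consistent)].sum()   # edges not cut by y
        score += p_y * (w.sum() - kept)
    return score

def run_adaptive_design(true_h, n_rounds=10):
    posterior = prior.copy()
    for _ in range(n_rounds):
        t = max(range(n_tests), key=lambda t: ec2_score(posterior, t))
        # Simulate the subject's (noisy) response to the chosen test.
        y = pred[true_h, t] if rng.random() > EPS else 1 - pred[true_h, t]
        posterior *= likelihood(y, t)                    # Bayesian update
        posterior /= posterior.sum()
    return posterior

print(run_adaptive_design(true_h=3).round(3))
```

In this sketch the posterior typically concentrates on the true hypothesis within a handful of tests; the accelerated greedy variant mentioned above exploits adaptive submodularity to avoid rescoring every test at every round.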
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries they chose. Aggregate posterior probabilities over the theories show limited evidence in favor of the CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., mask their true preferences and choose differently in order to obtain more favorable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out both because it is infeasible in practice and because we find no signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for the present bias models and hyperbolic discounting; most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
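For reference, the standard textbook forms of the discount functions being compared, written here with generic parameter names that need not match the thesis's (α, β) notation, are:

```latex
\begin{aligned}
\text{exponential:} \quad & D(t) = \delta^{t}, \quad 0 < \delta < 1,\\
\text{hyperbolic:} \quad & D(t) = \frac{1}{1 + k t}, \quad k > 0,\\
\text{quasi-hyperbolic:} \quad & D(0) = 1, \quad D(t) = \beta\,\delta^{t} \ \ (t > 0), \quad 0 < \beta \le 1,\\
\text{generalized hyperbolic:} \quad & D(t) = (1 + \alpha t)^{-\beta/\alpha}, \quad \alpha, \beta > 0.
\end{aligned}
```

The fixed-cost variant of present bias instead levies a one-time cost on any delayed payoff.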
In these models the passage of time is linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and to temporal choice inconsistency.
We also test the predictions of behavioral theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, demand for it will be greater than its price elasticity alone would explain. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase disproportionately. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large e-commerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
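Purely as an illustration of this kind of specification (the thesis's exact utility function is not given in the abstract), a loss-averse, reference-dependent term can enter a multinomial logit demand model as:

```latex
\begin{aligned}
V_{ij} &= \beta^{\top} x_{ij} \;-\; \alpha\, p_{ij}
          \;+\; \eta \,(r_{ij} - p_{ij})^{+}
          \;-\; \lambda\,\eta \,(p_{ij} - r_{ij})^{+},\\
\Pr(\text{consumer } i \text{ chooses } j) &= \frac{e^{V_{ij}}}{\sum_{k} e^{V_{ik}}},
\end{aligned}
```

where $r_{ij}$ is consumer $i$'s reference price for item $j$, $(z)^{+} = \max(z, 0)$, and $\lambda > 1$ captures loss aversion: a price above the reference is felt more strongly than an equally sized discount below it, producing the asymmetric demand responses described above.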
In future work, BROAD can be widely applied to testing different behavioral models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioral models with field data, and encourage combined lab-field experiments.
Abstract:
An economic air pollution control model, which determines the least cost of reaching various air quality levels, is formulated. The model takes the form of a general, nonlinear mathematical programming problem. Primary contaminant emission levels are the independent variables. The objective function is the cost of attaining various emission levels and is to be minimized subject to the constraints that given air quality levels be attained.
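In generic form (the notation here is illustrative, not taken from the thesis), the problem described is:

```latex
\begin{aligned}
\min_{E \,\ge\, 0} \quad & C(E) \\
\text{subject to} \quad & Q_i(E) \le S_i, \qquad i = 1, \dots, m,
\end{aligned}
```

where $E$ is the vector of primary contaminant emission levels, $C(E)$ the annual cost of attaining them, and each constraint requires that air quality measure $Q_i$ meet its standard $S_i$. In the Los Angeles application below, $E = (E_{\mathrm{RHC}}, E_{\mathrm{NO_x}})$ and the $Q_i$ are the expected days per year of NO2 and O3 standard violations.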
The model is applied to a simplified statement of the photochemical smog problem in Los Angeles County in 1975, with emissions specified by a two-dimensional vector: total reactive hydrocarbon (RHC) and nitrogen oxide (NOx) emissions. Air quality, also two-dimensional, is measured by the expected number of days per year that nitrogen dioxide (NO2) and mid-day ozone (O3) exceed standards in Central Los Angeles.
The minimum cost of reaching various emission levels is found by a linear programming model. The base or "uncontrolled" emission levels are those that will exist in 1975 with the present new-car control program and with the degree of stationary source control existing in 1971. Controls, basically "add-on" devices, are considered here for used cars, aircraft, and existing stationary sources. It is found that with these added controls, Los Angeles County emissions (1300 tons/day RHC and 1000 tons/day NOx in 1969; 670 tons/day RHC and 790 tons/day NOx at the 1975 base level) can be reduced to 260 tons/day RHC (minimum RHC program) and 460 tons/day NOx (minimum NOx program).
"Phenomenological" or statistical air quality models provide the relationship between air quality and emissions. These models estimate the relationship by using atmospheric monitoring data taken at one (yearly) emission level and by using certain simple physical assumptions, (e. g., that emissions are reduced proportionately at all points in space and time). For NO2, (concentrations assumed proportional to NOx emissions), it is found that standard violations in Central Los Angeles, (55 in 1969), can be reduced to 25, 5, and 0 days per year by controlling emissions to 800, 550, and 300 tons /day, respectively. A probabilistic model reveals that RHC control is much more effective than NOx control in reducing Central Los Angeles ozone. The 150 days per year ozone violations in 1969 can be reduced to 75, 30, 10, and 0 days per year by abating RHC emissions to 700, 450, 300, and 150 tons/day, respectively, (at the 1969 NOx emission level).
The control cost-emission level and air quality-emission level relationships are combined in a graphical solution of the complete model to find the cost of various air quality levels. The best possible air quality levels with the controls considered here are 8 O3 and 10 NO2 violations per year (minimum ozone program) or 25 O3 and 3 NO2 violations per year (minimum NO2 program), with an annualized cost of $230,000,000 (above the estimated $150,000,000 per year for the new-car control program for Los Angeles County motor vehicles in 1975).
Abstract:
The main theme running through these three chapters is that economic agents are often forced to respond to events that are not a direct result of their own actions or other agents' actions. The optimal response to these shocks will necessarily depend on agents' understanding of how the shocks arise. The economic environment in the first two chapters is analogous to the classic chain-store game. In this setting, the addition of unintended trembles by the agents creates an environment better suited to reputation building. The third chapter considers competitive equilibrium price dynamics in an overlapping generations environment when there are supply and demand shocks.
The first chapter is a game theoretic investigation of a reputation building game. A sequential equilibrium model, called the "error prone agents" model, is developed. In this model, agents believe that all actions are potentially subject to an error process. Including this belief in the equilibrium calculation provides a richer class of reputation building possibilities than when perfect implementation is assumed.
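One minimal way to formalize such an error process, shown here purely for illustration (the uniform-tremble assumption and the notation are mine, not necessarily the thesis's), is a trembling-hand implementation rule:

```latex
\Pr\big(\text{implemented action} = a \,\big|\, \text{intended action} = a^{*}\big)
  = \begin{cases}
      1 - \epsilon, & a = a^{*},\\[4pt]
      \dfrac{\epsilon}{|A| - 1}, & a \neq a^{*},
    \end{cases}
```

with agents' equilibrium beliefs computed under the common knowledge that any observed action may have arisen from such an error.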
In the second chapter, maximum likelihood estimation is employed to test the consistency of this new model, and of other models, with data from experiments run by other researchers that served as the basis for prominent papers in this field. The alternative models considered are essentially modifications of the standard sequential equilibrium. While some models perform well, in that the nature of the modification seems to explain deviations from the sequential equilibrium, the degree to which these modifications must be applied shows no consistency across different experimental designs.
The third chapter is a study of price dynamics in an overlapping generations model. It establishes the existence of a unique perfect-foresight competitive equilibrium price path in a pure exchange economy with a finite time horizon when there are arbitrarily many shocks to supply or demand. One main reason for the interest in this equilibrium is that overlapping generations environments are very fruitful for the study of price dynamics, especially in experimental settings. The perfect foresight assumption is an important place to start when examining these environments because it will produce the ex post socially efficient allocation of goods. This characteristic makes this a natural baseline to which other models of price dynamics could be compared.
Abstract:
In three essays we examine user-generated product ratings with aggregation. While recommendation systems have been studied extensively, this simple type of recommendation system has been neglected, despite its prevalence in the field. We develop a novel theoretical model of user-generated ratings. The model improves upon previous work in three ways: it considers rational agents and allows them to abstain from rating when rating is costly; it incorporates rating aggregation (such as averaging ratings); and it considers the effect of multiple simultaneous raters on rating strategies. In the first essay we provide a partial characterization of equilibrium behavior. In the second essay we test the theoretical model in the laboratory, and in the third we apply established behavioral models to the data generated in the lab. This study provides clues to the prevalence of extreme-valued ratings in field implementations. We show theoretically that in equilibrium, ratings distributions do not represent the value distributions of sincere ratings. Indeed, we show that if rating strategies follow a set of regularity conditions, then in equilibrium the rate at which players participate is increasing in the extremity of agents' valuations of the product. This theoretical prediction is realized in the lab. We also find that human subjects show a disproportionate predilection for sincere rating, and that when they do send insincere ratings, they are almost always in the direction of exaggeration. Both sincere and exaggerated ratings occur with great frequency despite the fact that such rating strategies are not in subjects' best interest. We therefore apply the behavioral concepts of quantal response equilibrium (QRE) and cursed equilibrium (CE) to the experimental data. Together, these theories explain the data significantly better than does a theory of rational, Bayesian behavior, accurately predicting key comparative statics. However, they fail to predict the high rates of sincerity, and it is clear that a better theory is needed.
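For context, the logit form of quantal response equilibrium referred to above is standard (the precision parameter $\lambda$ and payoff notation are the conventional ones, not necessarily those used in the thesis):

```latex
\sigma_i(a) \;=\;
  \frac{\exp\!\big(\lambda\, \bar{u}_i(a, \sigma_{-i})\big)}
       {\sum_{a' \in A_i} \exp\!\big(\lambda\, \bar{u}_i(a', \sigma_{-i})\big)},
```

where $\bar{u}_i(a, \sigma_{-i})$ is player $i$'s expected payoff from action $a$ given the other players' quantal-response strategies; $\lambda \to \infty$ recovers best responses and $\lambda = 0$ gives uniform randomization. Cursed equilibrium instead lets players partially neglect the dependence of other players' actions on their private information.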
Abstract:
In this thesis, we test the electroweak sector of the Standard Model of particle physics through measurements of the cross section of the simultaneous production of the neutral weak boson Z and a photon γ, and limits on the anomalous Zγγ and ZZγ triple gauge couplings h3 and h4, with the Z decaying to leptons (electrons and muons). We analyze events collected in proton-proton collisions at a center-of-mass energy of sqrt(s) = 7 TeV, corresponding to an integrated luminosity of 5.0 inverse femtobarns. The analyzed events were recorded by the Compact Muon Solenoid detector at the Large Hadron Collider in 2011.
The production cross section has been measured for hard photons with transverse momentum greater than 15 GeV that are separated from the final-state leptons in the eta-phi plane by Delta R greater than 0.7, for which the sum of the transverse energy of hadrons in a cone of Delta R less than 0.3 around the photon is less than 0.5 times the transverse energy of the photon, and with the invariant mass of the dilepton system greater than 50 GeV. The measured cross section is 5.33 +/- 0.08 (stat.) +/- 0.25 (syst.) +/- 0.12 (lumi.) picobarns. This is compatible with the Standard Model prediction that includes next-to-leading-order QCD contributions: 5.45 +/- 0.27 picobarns.
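Schematically, and only as the generic form of such a measurement (the analysis's actual signal extraction is more involved), the cross section follows from

```latex
\sigma \;=\; \frac{N_{\mathrm{obs}} - N_{\mathrm{bkg}}}{A \cdot \epsilon \cdot \int L \, dt},
```

where $N_{\mathrm{obs}}$ and $N_{\mathrm{bkg}}$ are the observed and estimated-background event yields, $A$ the fiducial acceptance, $\epsilon$ the selection efficiency, and $\int L\,dt$ the integrated luminosity (5.0 inverse femtobarns here).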
The measured 95% confidence-level upper limits on the absolute values of the anomalous couplings h3 and h4 are 0.01 and 8.8E-5 for the Zγγ interactions, and 8.6E-3 and 8.0E-5 for the ZZγ interactions. These values are also compatible with the Standard Model, in which they vanish at tree level. They extend the sensitivity of the 2012 results from the ATLAS collaboration, based on 1.02 inverse femtobarns of data, by a factor of 2.4 to 3.1.
Abstract:
Plate tectonics shapes our dynamic planet through the creation and destruction of lithosphere. This work focuses on increasing our understanding of the processes at convergent and divergent boundaries through geologic and geophysical observations at modern plate boundaries. Recent work had shown that the subducting slab in central Mexico is most likely the flattest on Earth, yet there was no consensus about how it originated. The first chapter of this thesis sets out to systematically test all previously proposed mechanisms for slab flattening against the Mexican case. We discover that there is only one model for which we can find no contradictory evidence. The lack of applicability of the standard mechanisms used to explain flat subduction in the Mexican example led us to question their application globally. The second chapter expands the search for a cause of flat subduction in both space and time. We focus on the historical record of flat slabs in South America and look for a correlation between the shallowing and steepening of slab segments and the inferred thickness of the subducting oceanic crust. Using plate reconstructions and the assumption that a crustal anomaly formed on a spreading ridge will produce two conjugate features, we recreate the history of subduction along the South American margin and find no correlation between the subduction of bathymetric highs and shallow subduction. These studies demonstrate that a subducting crustal anomaly is neither a sufficient nor a necessary condition for flat-slab subduction. The final chapter of this thesis looks at the divergent plate boundary in the Gulf of California. Through geologic reconnaissance mapping and an intensive paleomagnetic sampling campaign, we try to constrain the location and orientation of a widespread volcanic marker unit, the Tuff of San Felipe. Although the resolution of the applied magnetic susceptibility technique proved inadequate to constrain the direction of the pyroclastic flow with high precision, we have been able to detect the tectonic rotation of coherent blocks as well as rotation within blocks.
Abstract:
One of the greatest challenges in science lies in disentangling causality in complex, coupled systems. This is illustrated no better than in the dynamic interplay between the Earth and life. The early evolution and diversification of animals occurred within a backdrop of global change, yet reconstructing the potential role of the environment in this evolutionary transition is challenging. In the 200 million years from the end-Cryogenian to the Ordovician, enigmatic Ediacaran fauna explored body plans, animals diversified and began to biomineralize, forever changing the ocean's chemical cycles, and the biological community in shallow marine ecosystems transitioned from a microbial one to an animal one.
In the following dissertation, a multi-faceted approach combining macro- and micro-scale analyses is presented that draws on the sedimentology, geochemistry and paleontology of the rocks that span this transition to better constrain the potential environmental changes during this interval.
In Chapter 1, the potential of clumped isotope thermometry in deep time is explored by assessing the importance of burial and diagenesis on the thermometer. Eocene- to Precambrian-aged carbonates from the Sultanate of Oman were analyzed from current burial depths of 350-5850 meters. Two end-member styles of diagenesis independent of burial depth were observed.
Chapters 2, 3 and 4 explore the fallibility of the Ediacaran carbon isotope record and aspects of the sedimentology and geochemistry of the rocks preserving the largest negative carbon isotope excursion on record, the Shuram Excursion. Chapter 2 documents the importance of temperature, fluid composition and mineralogy on the delta 18-Omin record and interrogates the bulk trace metal signal. Chapter 3 explores the spatial variability in delta 13-C recorded in the transgressive Johnnie Oolite and finds a north-to-south trend recording the onset of the excursion. Chapter 4 investigates the nature of seafloor precipitation during this excursion and more broadly. We document the potential importance of microbial respiratory reactions on the carbonate chemistry of the sediment-water interface through time.
Chapter 5 investigates the latest Precambrian sedimentary record in carbonates from the Sultanate of Oman, including how delta 13-C and delta 34-S CAS vary across depositional and depth gradients. A new model for the correlation of the Buah and Ara formations across Oman is presented. Isotopic results indicate delta 13-C varies with relative eustatic change and delta 34-S CAS may vary in absolute magnitude across Oman.
Chapter 6 investigates the secular rise in delta 18-Omin in the early Paleozoic by using clumped isotope geochemistry on calcitic and phosphatic fossils from the Cambrian and Ordovician. Results do not indicate extreme delta 18-O seawater depletion and instead suggest warmer equatorial temperatures across the early Paleozoic.
Abstract:
This thesis brings together four papers on optimal resource allocation under uncertainty with capacity constraints. The first is an extension of the Arrow-Debreu contingent claim model to a good subject to supply uncertainty for which delivery capacity has to be chosen before the uncertainty is resolved. The second compares an ex-ante contingent claims market to a dynamic market in which capacity is chosen ex-ante and output and consumption decisions are made ex-post. The third extends the analysis to a storable good subject to random supply. Finally, the fourth examines optimal allocation of water under an appropriative rights system.
Abstract:
Despite years of research on low-angle detachments, much about them remains enigmatic. This thesis addresses some of the uncertainty regarding two particular detachments, the Mormon Peak detachment in Nevada and the Heart Mountain detachment in Wyoming and Montana.
Constraints on the geometry and kinematics of emplacement of the Mormon Peak detachment are provided by detailed geologic mapping of the Meadow Valley Mountains, along with an analysis of structural data within the allochthon in the Mormon Mountains. Identifiable structures well suited to constrain the kinematics of the detachment include a newly mapped, Sevier-age monoclinal flexure in the hanging wall of the detachment. This flexure, including the syncline at its base and the anticline at its top, can be readily matched to the base and top of the frontal Sevier thrust ramp, which is exposed in the footwall of the detachment to the east in the Mormon Mountains and Tule Springs Hills. The ~12 km of offset of these structural markers precludes the radial sliding hypothesis for emplacement of the allochthon.
The role of fluids in slip along faults is a widely investigated topic, but the use of carbonate clumped-isotope thermometry to investigate these fluids is new. Fault rocks from within ~1 m of the Mormon Peak detachment, including veins, breccias, gouges, and host rocks, were analyzed for carbon, oxygen, and clumped-isotope compositions. The data indicate that much of the carbonate breccia and gouge material along the detachment is comminuted host rock, as expected. Measurements in vein material indicate that the fluid system is dominated by meteoric water, whose temperature indicates circulation to substantial depths (c. 4 km) in the upper crust near the fault zone.
Slip along the subhorizontal Heart Mountain detachment is particularly enigmatic, and many different mechanisms for failure have been proposed, predominantly involving catastrophic failure. Textural evidence of multiple slip events is abundant, and includes multiple brecciation events and cross-cutting clastic dikes. Footwall deformation is observed in numerous exposures of the detachment. Stylolitic surfaces and alteration textures within and around “banded grains”, previously interpreted to be an indicator of high-temperature fluidization along the fault, suggest that these grains instead formed via low-temperature dissolution and alteration processes. There is abundant textural evidence of the significant role of fluids along the detachment via pressure solution. The process of pressure solution creep may be responsible for enabling multiple slip events on the low-angle detachment, via a local rotation of the stress field.
Clumped-isotope thermometry of fault rocks associated with the Heart Mountain detachment indicates that despite its location on the flanks of a volcano that was active during slip, the majority of carbonate along the Heart Mountain detachment does not record significant heating above ambient temperatures (c. 40-70°C). Instead, cold meteoric fluids infiltrated the detachment breccia, and carbonate precipitated under ambient temperatures controlled by structural depth. Locally, fault gouge does preserve hot temperatures (>200°C), as is observed in both the Mormon Peak detachment and Heart Mountain detachment areas. Samples with very hot temperatures attributable to frictional shear heating are present but rare. They appear to be best preserved in hanging wall structures related to the detachment, rather than along the main detachment.
Evidence is presented for the prevalence of relatively cold, meteoric fluids along both shallow crustal detachments studied, and for protracted histories of slip along both detachments. Frictional heating is evident from both areas, but is a minor component of the preserved fault rock record. Pressure solution is evident, and might play a role in initiating slip on the Heart Mountain fault, and possibly other low-angle detachments.
Abstract:
The current power grid is on the cusp of modernization due to the emergence of distributed generation and controllable loads, as well as renewable energy. On one hand, distributed and renewable generation is volatile and difficult to dispatch. On the other hand, controllable loads provide significant potential for compensating for the uncertainties. In a future grid where there are thousands or millions of controllable loads and a large portion of the generation comes from volatile sources like wind and solar, distributed control that shifts or reduces the power consumption of electric loads in a reliable and economic way would be highly valuable.
Load control needs to be conducted with network awareness. Otherwise, voltage violations and overloading of circuit devices are likely. To model these effects, network power flows and voltages have to be considered explicitly. However, the physical laws that determine power flows and voltages are nonlinear. Furthermore, while distributed generation and controllable loads are mostly located in distribution networks, which are multiphase and radial, most power flow studies focus on single-phase networks.
This thesis focuses on distributed load control in multiphase radial distribution networks. In particular, we first study distributed load control without considering network constraints, and then consider network-aware distributed load control.
Distributed implementation of load control is the main challenge when network constraints can be ignored. In this case, we first ignore the uncertainties in renewable generation and load arrivals and propose a distributed load control algorithm, Algorithm 1, that optimally schedules deferrable loads to shape the net electricity demand. Deferrable loads are loads whose total energy consumption is fixed but whose energy usage can be shifted over time in response to network conditions. Algorithm 1 is a distributed gradient descent algorithm, and empirically converges to optimal deferrable load schedules within 15 iterations.
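A minimal, centralized sketch of this kind of gradient descent scheduling (flattening net demand by shifting each deferrable load's fixed energy across time slots) is shown below. The quadratic "flatness" objective, the equality-only projection, and all names and numbers are illustrative assumptions, not the thesis's Algorithm 1.

```python
import numpy as np

def schedule_deferrable_loads(base_demand, energy_req, step=0.01, n_iter=200):
    """Gradient descent on a quadratic 'flatness' objective:
         minimize sum_t (base_demand[t] + total_load[t])**2
       subject to each load n delivering its total energy requirement
       (sum_t x[n, t] == energy_req[n]). Per-slot power bounds are
       omitted so that the projection stays trivial."""
    T = len(base_demand)
    x = np.tile(energy_req[:, None] / T, (1, T))   # start from flat profiles
    for _ in range(n_iter):
        net = base_demand + x.sum(axis=0)          # net demand in each slot
        grad = 2.0 * net                           # same gradient for every load
        x -= step * grad                           # gradient step
        # Project each load back onto {x_n : sum_t x_n[t] = energy_req[n]}.
        x += (energy_req - x.sum(axis=1))[:, None] / T
    return x

# Illustrative data: a 24-slot base demand profile and 3 deferrable loads.
base = 10.0 + 3.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 24))
loads = np.array([30.0, 26.0, 40.0])               # total energy per load
x = schedule_deferrable_loads(base, loads)
print(np.round(base + x.sum(axis=0), 2))           # flattened net demand
```

In the distributed version described above, each load performs its own update using a common price-like signal derived from the net demand, rather than a central scheduler holding all profiles.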
We then extend Algorithm 1 to a real-time setting where deferrable loads arrive over time and only imprecise predictions about future renewable generation and load are available at the time of decision making. The real-time algorithm, Algorithm 2, is based on model-predictive control: Algorithm 2 uses updated predictions of renewable generation as if they were the true values, and computes a pseudo load to represent future deferrable loads. The pseudo load consumes no power at the current time step, and its total energy consumption equals the expected total energy request of future deferrable loads.
Network constraints, e.g., transformer loading constraints and voltage regulation constraints, pose a significant challenge for the load control problem, since power flows and voltages are governed by nonlinear physical laws. Moreover, distribution networks are usually multiphase and radial. Two approaches are explored to overcome this challenge: one based on convex relaxation and one that seeks a locally optimal load schedule.
To explore the convex relaxation approach, a novel but equivalent power flow model, the branch flow model, is developed, and a semidefinite programming relaxation, called BFM-SDP, is obtained using the branch flow model. BFM-SDP is mathematically equivalent to a standard convex relaxation proposed in the literature, but is numerically much more stable. Empirical studies show that BFM-SDP is numerically exact for the IEEE 13-, 34-, 37-, and 123-bus networks and a real-world 2065-bus network, while the standard convex relaxation is numerically exact for only two of these networks.
Theoretical guarantees on the exactness of convex relaxations are provided for two types of networks: single-phase radial alternative-current (AC) networks, and single-phase mesh direct-current (DC) networks. In particular, for single-phase radial AC networks, we prove that a second-order cone program (SOCP) relaxation is exact if voltage upper bounds are not binding; we also modify the optimal load control problem so that its SOCP relaxation is always exact. For single-phase mesh DC networks, we prove that an SOCP relaxation is exact if 1) voltage upper bounds are not binding, or 2) voltage upper bounds are uniform and power injection lower bounds are strictly negative; we also modify the optimal load control problem so that its SOCP relaxation is always exact.
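For orientation, one common single-phase radial form of the branch flow (DistFlow) equations and the corresponding second-order cone relaxation is sketched below; this is the standard textbook form, not necessarily the exact multiphase formulation used in the thesis. For each line $(i,j)$ with impedance $r_{ij} + \mathrm{i}\,x_{ij}$, squared voltage magnitudes $v_i$, squared current magnitude $\ell_{ij}$, sending-end power flow $(P_{ij}, Q_{ij})$, and net injection $(p_j, q_j)$ at bus $j$:

```latex
\begin{aligned}
\sum_{k:\,(j,k)} P_{jk} &= P_{ij} - r_{ij}\,\ell_{ij} + p_j, \\
\sum_{k:\,(j,k)} Q_{jk} &= Q_{ij} - x_{ij}\,\ell_{ij} + q_j, \\
v_j &= v_i - 2\,(r_{ij}P_{ij} + x_{ij}Q_{ij}) + (r_{ij}^2 + x_{ij}^2)\,\ell_{ij}, \\
\ell_{ij} &= \frac{P_{ij}^2 + Q_{ij}^2}{v_i}
\;\;\longrightarrow\;\;
\ell_{ij} \;\ge\; \frac{P_{ij}^2 + Q_{ij}^2}{v_i} \quad \text{(SOCP relaxation)}.
\end{aligned}
```

The relaxation is "exact" when the relaxed inequality is tight at the optimum, which is what the conditions stated above guarantee.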
To seek a locally optimal load schedule, a distributed gradient-descent algorithm, Algorithm 9, is proposed. The suboptimality gap of the algorithm is rigorously characterized and is close to 0 for practical networks. Furthermore, unlike the convex relaxation approach, Algorithm 9 always produces a feasible solution. The gradients used in Algorithm 9 are estimated from a linear approximation of the power flow, derived under two assumptions: 1) line losses are negligible; and 2) voltages are reasonably balanced. Both assumptions are satisfied in practical distribution networks. Empirical results show that Algorithm 9 achieves a speedup of more than 70x over the convex relaxation approach, at the cost of a suboptimality within numerical precision.
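The kind of lossless linear approximation described here, written in its familiar single-phase ("LinDistFlow") form obtained by dropping the $\ell_{ij}$ terms from the branch flow equations above (the thesis's multiphase version additionally uses the balanced-voltage assumption), is:

```latex
\begin{aligned}
\sum_{k:\,(j,k)} P_{jk} &\approx P_{ij} + p_j, \qquad
\sum_{k:\,(j,k)} Q_{jk} \approx Q_{ij} + q_j, \\
v_j &\approx v_i - 2\,(r_{ij}P_{ij} + x_{ij}Q_{ij}),
\end{aligned}
```

which makes voltages affine in the power injections, so the gradients of voltage-dependent terms can be estimated in closed form.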
Abstract:
No abstract.