8 results for [JEL:Z12] Other Special Topics - Cultural Economics

in CaltechTHESIS


Relevance:

100.00%

Publisher:

Abstract:

Structural design is a decision-making process in which a wide spectrum of requirements, expectations, and concerns needs to be properly addressed. Engineering design criteria are considered together with societal and client preferences, and most of these design objectives are affected by the uncertainties surrounding a design. Therefore, realistic design frameworks must be able to handle multiple performance objectives and incorporate uncertainties from numerous sources into the process.

In this study, a multi-criteria based design framework for structural design under seismic risk is explored. The emphasis is on reliability-based performance objectives and their interaction with economic objectives. The framework has analysis, evaluation, and revision stages. In the probabilistic response analysis, seismic loading uncertainties as well as modeling uncertainties are incorporated. For evaluation, two approaches are suggested: one based on preference aggregation and the other based on socio-economics. Both implementations of the general framework are illustrated with simple but informative design examples to explore the basic features of the framework.

The first approach uses concepts similar to those found in multi-criteria decision theory, and directly combines reliability-based objectives with others. This approach is implemented in a single-stage design procedure. In the socio-economics based approach, a two-stage design procedure is recommended in which societal preferences are treated through reliability-based engineering performance measures, but emphasis is also given to economic objectives because these are especially important to the structural designer's client. A rational net asset value formulation including losses from uncertain future earthquakes is used to assess the economic performance of a design. A recently developed assembly-based vulnerability analysis is incorporated into the loss estimation.
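To make the economic evaluation concrete, here is a minimal Monte Carlo sketch of a net-asset-value comparison between designs under uncertain future earthquakes. The Poisson arrival model, the lognormal losses, the function names (`net_asset_value`, `loss_sampler`), and all numbers are illustrative assumptions; this is not the thesis's assembly-based vulnerability analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def net_asset_value(construction_cost, annual_income, horizon_years,
                    discount_rate, quake_rate, loss_sampler, n_sims=20_000):
    """Monte Carlo estimate of expected discounted net asset value.

    Earthquakes arrive as a Poisson process with rate `quake_rate`
    (events/year); `loss_sampler(n)` draws n repair-loss amounts.
    Illustrative sketch only.
    """
    npv = np.empty(n_sims)
    t = np.arange(1, horizon_years + 1)
    for i in range(n_sims):
        # Discounted income stream over the design horizon.
        value = np.sum(annual_income / (1 + discount_rate) ** t)
        # Random earthquake occurrence times and associated losses.
        n_events = rng.poisson(quake_rate * horizon_years)
        times = rng.uniform(0, horizon_years, n_events)
        losses = loss_sampler(n_events)
        value -= np.sum(losses / (1 + discount_rate) ** times)
        npv[i] = value - construction_cost
    return npv.mean()

# Example: a stiffer design costs more up front but suffers smaller
# losses per event (all numbers hypothetical).
baseline = net_asset_value(2.0e6, 3.0e5, 50, 0.05, 0.02,
                           lambda n: rng.lognormal(12.0, 1.0, n))
stiffer = net_asset_value(2.4e6, 3.0e5, 50, 0.05, 0.02,
                          lambda n: rng.lognormal(11.0, 1.0, n))
print(f"baseline NPV ~ {baseline:,.0f}, stiffer NPV ~ {stiffer:,.0f}")
```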

The presented performance-based design framework allows investigation of various design issues and their impact on a structural design. The framework is flexible, readily allowing the incorporation of new methods and concepts in seismic hazard specification, structural analysis, and loss estimation.

Relevance:

30.00%

Publisher:

Abstract:

The various singularities and instabilities which arise in the modulation theory of dispersive wavetrains are studied. Primary interest is in the theory of nonlinear waves, but a study of associated questions in linear theory provides background information and is of independent interest.

The full modulation theory is developed in general terms. In the first approximation for slow modulations, the modulation equations are solved. In both the linear and nonlinear theories, singularities and regions of multivalued modulations are predicted. Higher-order effects are considered to evaluate this first-order theory. An improved approximation is presented which gives the true behavior in the singular regions. For the linear case, the end result can be interpreted as the overlap of elementary wavetrains. In the nonlinear case, it is found that a sufficiently strong nonlinearity prevents this overlap. Transition zones with a predictable structure replace the singular regions.

For linear problems, exact solutions are found by Fourier integrals and other superposition techniques. These show the true behavior when breaking modulations are predicted.

A numerical study is made for the anharmonic lattice to assess the nonlinear theory. This study confirms the theoretical predictions of nonlinear group velocities, group splitting, and wavetrain instability, as well as higher-order effects in the singular regions.
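In the spirit of the numerical study described above, the following sketch integrates an anharmonic (FPU-beta type) lattice with a velocity-Verlet scheme. The quartic potential, the parameter values, and the initial modulated wavetrain are assumptions chosen for illustration, not the thesis's actual setup.

```python
import numpy as np

# Minimal anharmonic lattice integrator (FPU-beta type), illustrative only.
N, beta, dt, steps = 256, 1.0, 0.02, 5000
x = 0.1 * np.cos(2 * np.pi * 8 * np.arange(N) / N)  # seed wavetrain, mode 8
v = np.zeros(N)

def force(x):
    # Nearest-neighbor force from the potential V(r) = r^2/2 + beta r^4/4,
    # so V'(r) = r + beta r^3; periodic boundary conditions via np.roll.
    r_right = np.roll(x, -1) - x
    r_left = x - np.roll(x, 1)
    return (r_right + beta * r_right**3) - (r_left + beta * r_left**3)

for _ in range(steps):          # velocity-Verlet (leapfrog) time stepping
    v += 0.5 * dt * force(x)
    x += dt * v
    v += 0.5 * dt * force(x)

# Quadratic part of the energy as a rough diagnostic of the evolution.
diag = np.sum(v**2) / 2 + np.sum((np.roll(x, -1) - x)**2) / 2
print("energy-like diagnostic:", float(diag))
```

Tracking the envelope of `x` over time under a long-wavelength modulation of the seed is the kind of experiment that exhibits group splitting and wavetrain instability.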

Relevance:

30.00%

Publisher:

Abstract:

This thesis belongs to the growing field of economic networks. In particular, we develop three essays in which we study the problem of bargaining, discrete choice representation, and pricing in the context of networked markets. Despite analyzing very different problems, the three essays share the common feature of making use of a network representation to describe the market of interest.

In Chapter 1 we present an analysis of bargaining in networked markets. We make two contributions. First, we characterize market equilibria in a bargaining model and find that players' equilibrium payoffs coincide with their degree of centrality in the network, as measured by Bonacich's centrality measure. This characterization allows us to map, in a simple way, network structures into market equilibrium outcomes, so that payoff dispersion in networked markets is driven by players' network positions. Second, we show that the market equilibrium for our model converges to the so-called eigenvector centrality measure. We show that the economic condition for reaching convergence is that the players' discount factor goes to one. In particular, we show how the discount factor, the matching technology, and the network structure interact so that eigenvector centrality emerges as the limiting case of our market equilibrium.

We point out that the eigenvector approach is a way of finding the most central or relevant players in terms of the "global" structure of the network, while paying less attention to patterns that are more "local". Mathematically, eigenvector centrality captures the relevance of players in the bargaining process using the eigenvector associated with the largest eigenvalue of the adjacency matrix of a given network. Thus, our result may be viewed as an economic justification of the eigenvector approach in the context of bargaining in networked markets.
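As a concrete illustration of the construction just described, the sketch below computes eigenvector centrality by power iteration on an adjacency matrix. The five-player line network and the implementation details are illustrative assumptions, not the thesis's model.

```python
import numpy as np

def eigenvector_centrality(adj, tol=1e-10, max_iter=10_000):
    """Power iteration for the eigenvector associated with the largest
    eigenvalue of a nonnegative, connected adjacency matrix."""
    n = adj.shape[0]
    c = np.ones(n) / n
    for _ in range(max_iter):
        c_new = adj @ c
        c_new /= np.linalg.norm(c_new)
        if np.linalg.norm(c_new - c) < tol:
            break
        c = c_new
    return c_new

# A 5-player line network: players nearer the middle are more central.
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1
print(eigenvector_centrality(A).round(3))  # peaks at the middle player
```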

As an application, we analyze the special case of seller-buyer networks, showing how our framework may be useful for analyzing price dispersion as a function of sellers' and buyers' network positions.

Finally, in Chapter 3 we study the problem of price competition and free entry in networked markets subject to congestion effects. In many environments, such as communication networks in which network flows are allocated, or transportation networks in which traffic is directed through the underlying road architecture, congestion plays an important role. In particular, we consider a network with multiple origins and a common destination node, where each link is owned by a firm that sets prices in order to maximize profits, whereas users want to minimize the total cost they face, which is given by the congestion cost plus the prices set by firms. In this environment, we introduce the notion of Markovian traffic equilibrium to establish the existence and uniqueness of a pure-strategy price equilibrium, without assuming that the demand functions are concave or imposing particular functional forms for the latency functions. We derive explicit conditions that guarantee existence and uniqueness of equilibria. Given this existence and uniqueness result, we apply our framework to study entry decisions and welfare, and establish that in congested markets with free entry, the number of firms exceeds the social optimum.
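To illustrate the flavor of price competition under congestion, here is a toy best-response computation on two parallel links with linear latency and inelastic demand. This Bertrand-style example is an assumption-laden sketch, not the thesis's Markovian traffic equilibrium framework.

```python
import numpy as np

# Two parallel links with latency l_i(x) = a_i * x and inelastic demand D.
# Users split traffic so generalized costs (latency + price) equalize;
# each firm best-responds to the other's price. Illustrative only.
a = np.array([1.0, 2.0])   # latency slopes of links 1 and 2
D = 1.0                    # total traffic demand

def flows(p):
    # Interior Wardrop split: a1*x1 + p1 = a2*x2 + p2, with x1 + x2 = D.
    x1 = np.clip((a[1] * D + p[1] - p[0]) / (a[0] + a[1]), 0.0, D)
    return np.array([x1, D - x1])

p = np.array([0.5, 0.5])
for _ in range(200):                       # best-response dynamics
    p = np.array([(a[1] * D + p[1]) / 2,   # argmax of p1 * x1(p1, p2)
                  (a[0] * D + p[0]) / 2])
print("equilibrium prices:", p.round(3), "flows:", flows(p).round(3))
# Closed form for this toy example: p_i = D * (a_i + 2 * a_j) / 3.
```

With these numbers the iteration converges to prices (5/3, 4/3); note the firm owning the worse (more congestible) link still charges the higher price, since congestion softens competition.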

Relevance:

30.00%

Publisher:

Abstract:

In the quest to develop viable designs for third-generation optical interferometric gravitational-wave detectors, one strategy is to monitor the relative momentum or speed of the test-mass mirrors, rather than monitoring their relative position. The most straightforward design for a speed-meter interferometer that accomplishes this is described and analyzed in Chapter 2. This design (due to Braginsky, Gorodetsky, Khalili, and Thorne) is analogous to a microwave-cavity speed meter conceived by Braginsky and Khalili. A mathematical mapping between the microwave speed meter and the optical interferometric speed meter is developed and used to show (in accord with the speed being a quantum nondemolition observable) that in principle the interferometric speed meter can beat the gravitational-wave standard quantum limit (SQL) by an arbitrarily large amount, over an arbitrarily wide range of frequencies. However, in practice, to reach or beat the SQL, this specific speed meter requires exorbitantly high input light power. The physical reason for this is explored, along with other issues such as constraints on performance due to optical dissipation.

Chapter 3 proposes a more sophisticated version of a speed meter. This new design requires only a modest input power and appears to be a fully practical candidate for third-generation LIGO. It can beat the SQL (the approximate sensitivity of second-generation LIGO interferometers) over a broad range of frequencies (~10 to 100 Hz in practice) by a factor h/h_SQL ~ √(W_circ^SQL / W_circ). Here W_circ is the light power circulating in the interferometer arms and W_circ^SQL ≃ 800 kW is the circulating power required to beat the SQL at 100 Hz (the LIGO-II power). If squeezed vacuum (with a power-squeeze factor e^(−2R)) is injected into the interferometer's output port, the SQL can be beat with a much-reduced laser power: h/h_SQL ~ √(e^(−2R) W_circ^SQL / W_circ). For realistic parameters (e^(−2R) ≃ 0.1 and W_circ ≃ 800 to 2000 kW), the SQL can be beat by a factor of ~3 to 4 from 10 to 100 Hz. [However, as the power increases in these expressions, the speed meter becomes more narrowband; additional power and re-optimization of some parameters are required to maintain the wide band.] By performing frequency-dependent homodyne detection on the output (with the aid of two kilometer-scale filter cavities), one can markedly improve the interferometer's sensitivity at frequencies above 100 Hz.
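As a quick arithmetic check of the quoted improvement factors (using the values as reconstructed above, W_circ^SQL ≃ 800 kW and e^(−2R) ≃ 0.1):

```latex
% Worked check of the quoted scaling (illustrative; requires amsmath).
\[
  \frac{h}{h_{\mathrm{SQL}}}
  \sim \sqrt{e^{-2R}\,\frac{W_{\mathrm{circ}}^{\mathrm{SQL}}}{W_{\mathrm{circ}}}}
  \;\Longrightarrow\;
  \begin{cases}
    \sqrt{0.1 \cdot 800/800} \approx 0.32 & (W_{\mathrm{circ}} = 800~\mathrm{kW}),\\
    \sqrt{0.1 \cdot 800/2000} = 0.20 & (W_{\mathrm{circ}} = 2000~\mathrm{kW}),
  \end{cases}
\]
```

i.e., noise roughly a factor 3 to 5 below the SQL, broadly consistent with the quoted ~3 to 4 once the narrowbanding caveat and re-optimization are taken into account.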

Chapters 2 and 3 are part of an ongoing effort to develop a practical variant of an interferometric speed meter and to combine the speed meter concept with other ideas to yield a promising third-generation interferometric gravitational-wave detector that entails low laser power.

Chapter 4 is a contribution to the foundations for analyzing sources of gravitational waves for LIGO. Specifically, it presents an analysis of the tidal work done on a self-gravitating body (e.g., a neutron star or black hole) in an external tidal field (e.g., that of a binary companion). The change in the mass-energy of the body as a result of the tidal work, or "tidal heating," is analyzed using the Landau-Lifshitz pseudotensor and the local asymptotic rest frame of the body. It is shown that the work done on the body is gauge invariant, while the body-tidal-field interaction energy contained within the body's local asymptotic rest frame is gauge dependent. This is analogous to Newtonian theory, where the interaction energy is shown to depend on how one localizes gravitational energy, but the work done on the body is independent of that localization. These conclusions play a role in analyses, by others, of the dynamics and stability of the inspiraling neutron-star binaries whose gravitational waves are likely to be seen and studied by LIGO.

Relevance:

30.00%

Publisher:

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn inform the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
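To make the EC2 objective concrete, here is a minimal noiseless sketch: hypotheses are theory-parameter pairs grouped into equivalence classes by theory, edges connect hypotheses in different classes with weight equal to the product of their priors, and the greedy rule picks the test with the largest expected cut weight. The hypotheses, tests, and deterministic predictions below are toy assumptions, not the thesis's designs.

```python
import itertools

# Toy hypothesis space: two theories (EV, PT), two parameterizations each.
hypotheses = ['EV-a', 'EV-b', 'PT-a', 'PT-b']
theory = {'EV-a': 'EV', 'EV-b': 'EV', 'PT-a': 'PT', 'PT-b': 'PT'}
prior = {h: 0.25 for h in hypotheses}
# predictions[t][h]: binary choice predicted by hypothesis h on test t.
predictions = {
    't1': {'EV-a': 0, 'EV-b': 0, 'PT-a': 1, 'PT-b': 1},  # separates theories
    't2': {'EV-a': 0, 'EV-b': 1, 'PT-a': 0, 'PT-b': 1},  # separates parameters
}

def cut_weight(alive, test, outcome):
    # Weight of cross-class edges cut when this outcome eliminates
    # every hypothesis predicting the other outcome.
    survivors = {h for h in alive if predictions[test][h] == outcome}
    w = 0.0
    for h, g in itertools.combinations(alive, 2):
        if theory[h] != theory[g] and not (h in survivors and g in survivors):
            w += prior[h] * prior[g]
    return w

def expected_cut(alive, test):
    mass = sum(prior[h] for h in alive)
    p1 = sum(prior[h] for h in alive if predictions[test][h] == 1) / mass
    return (1 - p1) * cut_weight(alive, test, 0) + p1 * cut_weight(alive, test, 1)

alive = list(hypotheses)
best = max(predictions, key=lambda t: expected_cut(alive, t))
print("EC2 picks:", best)  # t1: it cuts all cross-theory edges at once
```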

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice and because we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
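For reference, the discount functions of the families compared above can be written down in a few lines; the parameter values are illustrative assumptions (and the quasi-hyperbolic form is given here in the common beta-delta notation, which may differ from the thesis's (α, β) parameterization).

```python
import numpy as np

def exponential(t, delta=0.9):
    return delta ** t

def hyperbolic(t, k=0.3):
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta=0.7, delta=0.95):
    # "Present bias": full weight today, uniform penalty beta afterwards.
    return np.where(t == 0, 1.0, beta * delta ** t)

def generalized_hyperbolic(t, alpha=1.0, k=0.3):
    # Loewenstein-Prelec form; nests plain hyperbolic when alpha = k.
    return (1.0 + k * t) ** (-alpha / k)

t = np.arange(0, 6)
for f in (exponential, hyperbolic, quasi_hyperbolic, generalized_hyperbolic):
    print(f"{f.__name__:23s}", np.round(f(t), 3))
```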

In these models, the passage of time is treated as linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments of risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
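A minimal sketch of this kind of model: a logit discrete-choice rule whose utility includes a reference-dependent price term with losses weighted more heavily than gains. The parameter values, the reference-price rule, and the loss-aversion coefficient (2.25, a common prospect-theory calibration) are assumptions for illustration, not estimates from the retailer data.

```python
import numpy as np

def gain_loss(price, ref_price, lam=2.25):
    # Prices below the reference feel like gains; above it, like losses,
    # which are amplified by the loss-aversion coefficient lam.
    diff = ref_price - price
    return np.where(diff >= 0, diff, lam * diff)

def choice_probs(values, prices, ref_prices, alpha=1.0, eta=0.5):
    # Utility = intrinsic value - price + eta * gain/loss term; logit choice.
    u = values - alpha * prices + eta * gain_loss(prices, ref_prices)
    e = np.exp(u - u.max())
    return e / e.sum()

values = np.array([5.0, 5.0])   # two close substitutes
ref = np.array([4.0, 4.0])      # reference (past) prices
print(choice_probs(values, np.array([3.5, 4.0]), ref))  # item 1 on discount
print(choice_probs(values, np.array([4.5, 4.0]), ref))  # discount removed
```

The asymmetry is visible in the output: the discount boosts item 1's share by more than the price cut alone explains, and removing it swings demand excessively toward the substitute.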

In future work, BROAD can be applied widely to testing different behavioural models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance:

30.00%

Publisher:

Abstract:

I. Trimesic acid (1,3,5-benzenetricarboxylic acid) crystallizes with a monoclinic unit cell of dimensions a = 26.52 Å, b = 16.42 Å, c = 26.55 Å, and β = 91.53°, with 48 molecules/unit cell. Extinctions indicated a space group of Cc or C2/c; a satisfactory structure was obtained in the latter with 6 molecules/asymmetric unit (C54O36H36, formula weight 1261 g). Of approximately 12,000 independent reflections within the CuKα sphere, intensities of 11,563 were recorded visually from equi-inclination Weissenberg photographs.

The structure was solved by packing considerations aided by molecular transforms and two- and three-dimensional Patterson functions. Hydrogen positions were found on difference maps. A total of 978 parameters were refined by least squares; these included hydrogen parameters and anisotropic temperature factors for the C and O atoms. The final R factor was 0.0675; the final "goodness of fit" was 1.49. All calculations were carried out on the Caltech IBM 7040-7094 computer using the CRYRM Crystallographic Computing System.
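The residual quoted above is the standard crystallographic R factor, R = Σ||Fo| − |Fc|| / Σ|Fo|. A short sketch with synthetic observed/calculated structure-factor amplitudes (illustrative stand-ins for real data):

```python
import numpy as np

def r_factor(F_obs, F_calc):
    """Crystallographic residual R = sum(||Fo| - |Fc||) / sum(|Fo|)."""
    F_obs, F_calc = np.abs(F_obs), np.abs(F_calc)
    return np.sum(np.abs(F_obs - F_calc)) / np.sum(F_obs)

rng = np.random.default_rng(1)
F_obs = rng.uniform(1, 100, 1000)
F_calc = F_obs * (1 + rng.normal(0, 0.08, 1000))  # ~8% model discrepancy
print(f"R = {r_factor(F_obs, F_calc):.4f}")       # around 0.06-0.07
```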

The six independent molecules fall into two groups of three nearly parallel molecules. All molecules are connected by carboxyl-to-carboxyl hydrogen bond pairs to form a continuous array of six-molecule rings with a chicken-wire appearance. These arrays bend to assume two orientations, forming pleated sheets. Arrays in different orientations interpenetrate - three molecules in one orientation passing through the holes of three parallel arrays in the alternate orientation - to produce a completely interlocking network. One third of the carboxyl hydrogen atoms were found to be disordered.

II. Optical transforms as related to x-ray diffraction patterns are discussed with reference to the theory of Fraunhofer diffraction.

The use of a systems approach in crystallographic computing is discussed with special emphasis on the way in which this has been done at the California Institute of Technology.

An efficient manner of calculating Fourier and Patterson maps on a digital computer is presented. Expressions for the calculation of to-scale maps for standard sections and for general-plane sections are developed; space-group-specific expressions in a form suitable for computers are given for all space groups except the hexagonal ones.
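The Patterson function is the Fourier transform of the squared structure-factor amplitudes, so on a digital computer it reduces to a single inverse FFT over the reciprocal-space grid, with no phase information required. A minimal sketch (the random "data" below stand in for measured intensities):

```python
import numpy as np

rng = np.random.default_rng(2)
shape = (32, 32, 32)                 # reciprocal-space grid (h, k, l)
F = rng.normal(size=shape) + 1j * rng.normal(size=shape)
intensities = np.abs(F) ** 2         # |F|^2 needs no phases
patterson = np.real(np.fft.ifftn(intensities))
# The origin peak sums all intensities and is always the global maximum.
print("origin peak:", patterson[0, 0, 0])
```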

Expressions for the calculation of settings for an Eulerian-cradle diffractometer are developed for both the general triclinic case and the orthogonal case.

Photographic materials on pp. 4, 6, 10, and 20 are essential and will not reproduce clearly on Xerox copies. Photographic copies should be ordered.

Relevance:

30.00%

Publisher:

Abstract:

As borne out by everyday social experience, social cognition is highly dependent on context, modulated by a host of factors that arise from the social environment in which we live. While streamlined laboratory research provides excellent experimental control, it can be limited to telling us about the capabilities of the brain under artificial conditions, rather than elucidating the processes that come into play in the real world. Consideration of the impact of ecologically valid contextual cues on social cognition will improve the generalizability of social neuroscience findings, including to pathology, e.g., to psychiatric illnesses. To help bridge between laboratory research and social cognition as we experience it in the real world, this thesis investigates three themes: (1) increasing the naturalness of stimuli with richer contextual cues, (2) the potentially special contextual case of social cognition when two people interact directly, and (3) experimental believability, which runs in parallel to the first two themes.

Focusing on the first two themes, in work with two patient populations, we explore neural contributions to two topics in social cognition. First, we document a basic approach bias in rare patients with bilateral lesions of the amygdala. This finding is then related to the contextual factor of ambiguity, and further investigated together with other contextual cues in a sample of healthy individuals tested over the internet, finally yielding a hierarchical decision tree for social threat evaluation. Second, we demonstrate that neural processing of eye gaze in brain structures related to face, gaze, and social processing is differently modulated by the direct presence of another live person; this question is investigated using fMRI in people with autism and in controls. Across a range of topics, we demonstrate that two themes of ecological validity (integration of naturalistic contextual cues, and social interaction) influence social cognition, that particular brain structures mediate this processing, and that it will be crucial to study interaction in order to understand disorders of social interaction such as autism.

Relevance:

30.00%

Publisher:

Abstract:

Time, risk, and attention are all integral to economic decision making. The aim of this work is to understand those key components of decision making using a variety of approaches: providing axiomatic characterizations to investigate time discounting, generating measures of visual attention to infer consumers' intentions, and examining data from unique field settings.

Chapter 2, co-authored with Federico Echenique and Kota Saito, presents the first revealed-preference characterizations of the exponentially discounted utility model and its generalizations. My characterizations provide non-parametric revealed-preference tests. I apply the tests to data from a recent experiment and find that the axiomatization delivers new insights into a dataset that had been analyzed by traditional parametric methods.

Chapter 3, co-authored with Min Jeong Kang and Colin Camerer, investigates whether "pre-choice" measures of visual attention improve prediction of consumers' purchase intentions. We measure participants' visual attention using eyetracking or mousetracking while they make hypothetical as well as real purchase decisions. I find that different patterns of visual attention are associated with hypothetical and real decisions. I then demonstrate that including information on visual attention improves prediction of purchase decisions when attention is measured with mousetracking.
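The kind of comparison described here (does adding an attention feature improve out-of-sample prediction of purchases?) can be sketched in a few lines. The synthetic data, the `dwell` feature, and the coefficient values are assumptions used only to make the pipeline concrete; the chapter's actual data come from eyetracking and mousetracking.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 1000
price = rng.uniform(1, 10, n)
dwell = rng.exponential(1.0, n)      # hypothetical attention proxy
logit = -0.4 * price + 1.2 * dwell   # assumed true decision process
buy = rng.random(n) < 1 / (1 + np.exp(-logit))

X_base = price.reshape(-1, 1)
X_full = np.column_stack([price, dwell])
for name, X in [("price only", X_base), ("price + attention", X_full)]:
    acc = cross_val_score(LogisticRegression(), X, buy, cv=5).mean()
    print(f"{name:18s} cross-validated accuracy ~ {acc:.3f}")
```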

Chapter 4 investigates individuals' attitudes towards risk in a high-stakes environment using data from a TV game show, Jeopardy!. I first quantify players' subjective beliefs about answering questions correctly. Using those beliefs in estimation, I find that the representative player is risk averse. I then find that trailing players tend to wager more than the "folk" strategies known among the community of contestants and fans would prescribe, and that this tendency is related to their confidence. I also find gender differences: male players take more risk than female players, and even more so when they are competing against two other male players.

Chapter 5, co-authored with Colin Camerer, investigates the dynamics of the favorite-longshot bias (FLB) using data on horse race betting from an online exchange that allows bettors to trade "in-play." I find that the probabilistic forecasts implied by market prices before the start of the races are well-calibrated, but that the degree of FLB increases significantly as the events approach their end.
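A calibration check of market-implied probabilities of the kind mentioned here can be sketched by binning the implied probabilities and comparing each bin's mean forecast with the empirical win rate. The synthetic data below stand in for the exchange prices and race outcomes.

```python
import numpy as np

rng = np.random.default_rng(4)
p_true = rng.beta(2, 5, 20_000)                  # hypothetical win probabilities
implied = np.clip(p_true + rng.normal(0, 0.02, p_true.size), 0.001, 0.999)
won = rng.random(p_true.size) < p_true           # simulated race outcomes

bins = np.linspace(0, 1, 11)
idx = np.digitize(implied, bins) - 1
for b in range(10):                              # forecast vs. outcome per bin
    mask = idx == b
    if mask.sum() > 50:
        print(f"forecast {implied[mask].mean():.2f}  outcome {won[mask].mean():.2f}")
```

For well-calibrated prices the two columns track each other; a favorite-longshot bias would appear as outcomes below forecasts in the low-probability bins.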