840 results for Economics, Mathematical
Abstract:
We study the language choice behavior of bilingual speakers in modern societies, such as the Basque Country, Ireland and Wales. These countries have two official languages: A, spoken by all, and B, spoken by a minority. We think of the bilinguals in those societies as a population playing repeatedly a Bayesian game in which they must choose strategically the language, A or B, that might be used in the interaction. The choice has to be made under imperfect information about the linguistic type of the interlocutors. We take the Nash equilibrium of the language use game as a model for real-life language choice behavior. It is shown that the predictions made with this model fit very well the data about the actual use, contained in the censuses, of the Basque, Irish and Welsh languages. Then the question posed by Fishman (2001), which appears in the title, is answered as follows: it is hard, mainly, because bilingual speakers have reached an equilibrium which is evolutionarily stable. This means that to solve, fast and in a reflex manner, their frequent language coordination problem, bilinguals have developed linguistic conventions based chiefly on the strategy 'Use the same language as your interlocutor', which weakens the actual use of B.
Abstract:
A mathematical model to optimize the German fishing fleet is drafted and its data basis is described. The model was developed by Brodersen, Campbell and Hanf between 1994 and 1998. It could be shown that this model is flexible enough to be applied successfully to many very different political questions, if adapted accordingly. The economic consequences of fishery policy measures, the effects of technical advances, and also increasing uncertainties can, to some degree, be assessed quantitatively in an appropriate way. Finally, it could be shown that, in principle, the available stock of data is a good basis for investigations into fishery economics and fishery politics. However, there is a need to maintain the data sources continuously and competently in order to make this information available quickly. Statistical data reflecting the fishery sector are valuable. However, they attain their full value only when judged by experts from the fishing industry, biology and technical fishery research.
Abstract:
This paper deals with the economics of gasification facilities in general and IGCC power plants in particular. Regarding the prospects of these systems, passing the technological test is one thing, passing the economic test can be quite another. In this respect, traditional valuations assume constant input and/or output prices. Since this is hardly realistic, we allow for uncertainty in prices. We naturally look at the markets where many of the products involved are regularly traded. Futures markets on commodities are particularly useful for valuing uncertain future cash flows. Thus, revenues and variable costs can be assessed by means of sound financial concepts and actual market data. On the other hand, these complex systems provide a number of flexibility options (e.g., to choose among several inputs, outputs, modes of operation, etc.). Typically, flexibility contributes significantly to the overall value of real assets. Indeed, maximization of the asset value requires the optimal exercise of any flexibility option available. Yet the economic value of flexibility is elusive, the more so under (price) uncertainty. And the right choice of input fuels and/or output products is a main concern for the facility managers. As a particular application, we deal with the valuation of input flexibility. We follow the Real Options approach. In addition to economic variables, we also address technical and environmental issues such as energy efficiency, utility performance characteristics and emissions (note that carbon constraints are looming). Lastly, a brief introduction to some stochastic processes suitable for valuation purposes is provided.
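As a rough illustration of why price uncertainty matters for flexibility value, the sketch below simulates two correlated fuel prices as geometric Brownian motions and compares a plant committed to one fuel with a plant that can switch each period. All parameters (drifts, volatilities, correlation, prices) are illustrative assumptions, not figures from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not from the paper): two correlated fuel prices
# following geometric Brownian motion, a fixed output (electricity) price,
# and a one-year horizon discretized into daily steps.
n_paths, n_steps, dt = 10_000, 252, 1 / 252
mu, sigma = np.array([0.02, 0.03]), np.array([0.25, 0.35])
corr = 0.4
chol = np.linalg.cholesky(np.array([[1.0, corr], [corr, 1.0]]))
p0 = np.array([30.0, 28.0])          # initial fuel prices
p_out = 60.0                         # output price per unit

z = rng.standard_normal((n_paths, n_steps, 2)) @ chol.T
log_paths = np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
prices = p0 * np.exp(log_paths)      # shape: (paths, steps, 2)

# Rigid plant: committed to fuel 0. Flexible plant: each period burns the
# cheaper fuel. The gap in expected margins is the value of input flexibility.
margin_rigid = (p_out - prices[:, :, 0]).sum(axis=1).mean()
margin_flex = (p_out - prices.min(axis=2)).sum(axis=1).mean()
flexibility_value = margin_flex - margin_rigid
print(f"value of input flexibility per unit of capacity: {flexibility_value:.1f}")
```

Under these assumptions the flexible plant's expected margin weakly dominates the rigid one's, and the gap widens with volatility; a full real-options treatment would value the switching option against futures prices rather than assumed drifts.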
Abstract:
How is climate change affecting our coastal environment? How can coastal communities adapt to sea level rise and increased storm risk? These questions have garnered tremendous interest from scientists and policy makers alike, as the dynamic coastal environment is particularly vulnerable to the impacts of climate change. Over half the world's population lives and works in a coastal zone less than 120 miles wide, and is thereby continuously affected by changes in the coastal environment [6]. Housing markets are directly influenced by the physical processes that govern coastal systems. Beach towns like Oak Island in North Carolina (NC) face severe erosion, and the tax-assessed value of one coastal property fell by 93% in 2007 [9]. With almost ninety percent of the sandy beaches in the US facing moderate to severe erosion [8], coastal communities often intervene to stabilize the shoreline and hold back the sea in order to protect coastal property and infrastructure. Beach nourishment, the process of rebuilding a beach by periodically replacing an eroding section with sand dredged from another location, is a policy for erosion control in many parts of the US Atlantic and Pacific coasts [3]. Beach nourishment projects in the United States are primarily federally funded and implemented by the Army Corps of Engineers (ACE) after a benefit-cost analysis. Benefits from beach nourishment include reduction in storm damage and recreational benefits from a wider beach. Costs include the expected cost of construction, the present value of periodic maintenance, and any external costs such as the environmental cost associated with a nourishment project (NOAA). Federal appropriations for nourishment totaled $787 million from 1995 to 2002 [10]. Human interventions to stabilize shorelines and physical coastal dynamics are strongly coupled.
The value of the beach, in the form of storm protection and recreation amenities, is at least partly capitalized into property values. These beach values ultimately influence the benefit-cost analysis in support of shoreline stabilization policy, which, in turn, affects the shoreline dynamics. This paper explores the policy implications of this circularity. With a better understanding of the physical-economic feedbacks, policy makers can more effectively design climate change adaptation strategies. (PDF contains 4 pages)
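The benefit-cost logic described above can be sketched in a few lines. The discount rate, horizon, construction and re-nourishment costs, and annual benefits below are illustrative assumptions, not numbers from the paper or from ACE practice:

```python
# A minimal benefit-cost sketch for a nourishment project: an upfront
# construction cost, re-nourishment every 5 years, and annual benefits from
# storm-damage reduction and recreation, discounted over a 30-year horizon.
def npv(rate, cash_flows):
    """Present value of a list of (year, amount) cash flows."""
    return sum(amount / (1 + rate) ** year for year, amount in cash_flows)

rate, horizon = 0.04, 30
costs = [(0, 10e6)] + [(t, 4e6) for t in range(5, horizon + 1, 5)]   # build + maintain
benefits = [(t, 1.5e6) for t in range(1, horizon + 1)]               # annual amenity value

net = npv(rate, benefits) - npv(rate, costs)
print(f"net present value: ${net / 1e6:.1f}M")
```

The circularity the paper studies enters through the benefit side: the annual amenity value is itself a function of property values, which respond to the nourishment policy.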
Abstract:
Regulatory action to protect California’s coastal water quality from degradation by copper from recreational boats’ antifouling paints interacts with efforts to prevent transport of invasive, hull-fouling species. A copper regulatory program is in place for a major yacht basin in northern San Diego Bay and in process for other major, California boat basins. “Companion” fouling control strategies are used with copper-based antifouling paints, as some invasive species have developed resistance to the copper biocide. Such strategies are critical for boats with less toxic or nontoxic hull coatings. Boat traffic along over 3,000 miles of coastline in California and Baja California increases invasive species transport risks. For example, 80% of boats in Baja California marinas are from the United States, especially California. Policy makers, boating businesses and boat owners need information on costs and supply-side capacity for effective fouling control measures to co-manage water quality and invasive species concerns. (PDF contains 3 pages)
Abstract:
This thesis examines foundational questions in behavioral economics—also called psychology and economics—and the neural foundations of varied sources of utility. We have three primary aims: first, to provide the field of behavioral economics with psychological theories of behavior that are derived from neuroscience, and to use those theories to identify novel evidence for behavioral biases; second, to provide neural and micro foundations for behavioral preferences that give rise to well-documented empirical phenomena in behavioral economics; and finally, to show how a deep understanding of the neural foundations of these behavioral preferences can feed back into our theories of social preferences and reference-dependent utility.
The first chapter focuses on classical conditioning and its application in identifying the psychological underpinnings of a pricing phenomenon. We return to classical conditioning again in the third chapter where we use fMRI to identify varied sources of utility—here, reference dependent versus direct utility—and cross-validate our interpretation with a conditioning experiment. The second chapter engages social preferences and, more broadly, causative utility (wherein the decision-maker derives utility from making or avoiding particular choices).
Abstract:
The problem of the finite-amplitude folding of an isolated, linearly viscous layer under compression and imbedded in a medium of lower viscosity is treated theoretically by using a variational method to derive finite difference equations which are solved on a digital computer. The problem depends on a single physical parameter, the ratio of the fold wavelength, L, to the "dominant wavelength" of the infinitesimal-amplitude treatment, L_d. Therefore, the natural range of physical parameters is covered by the computation of three folds, with L/L_d = 0, 1, and 4.6, up to a maximum dip of 90°.
Significant differences in fold shape are found among the three folds; folds with higher L/L_d have sharper crests. Folds with L/L_d = 0 and L/L_d = 1 become fan folds at high amplitude. A description of the shape in terms of a harmonic analysis of inclination as a function of arc length shows this systematic variation with L/L_d and is relatively insensitive to the initial shape of the layer. This method of shape description is proposed as a convenient way of measuring the shape of natural folds.
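The proposed shape-description method (harmonic analysis of inclination as a function of arc length) can be sketched as follows. The profile here is a synthetic sine wave, not one of the computed folds:

```python
import numpy as np

# Describe a fold profile by the harmonic content of its limb inclination
# phi(s), where s is arc length along the layer.
n = 512
x = np.linspace(0, 2 * np.pi, n)
y = 0.5 * np.sin(x)                      # synthetic fold profile (illustrative)
dx, dy = np.gradient(x), np.gradient(y)
ds = np.hypot(dx, dy)
s = np.cumsum(ds)                        # arc length along the layer
phi = np.arctan2(dy, dx)                 # inclination (dip) along the layer

# Harmonic coefficients of phi(s): resample onto uniform arc length, then FFT.
s_uniform = np.linspace(s[0], s[-1], n)
phi_uniform = np.interp(s_uniform, s, phi)
coeffs = np.fft.rfft(phi_uniform) / n
amplitudes = 2 * np.abs(coeffs[1:6])     # first few harmonics
print("harmonic amplitudes:", np.round(amplitudes, 3))
```

Sharper-crested folds (higher L/L_d) would show relatively stronger higher harmonics in `amplitudes`, which is what makes this a convenient quantitative shape measure for natural folds.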
The infinitesimal-amplitude treatment does not predict fold-shape development satisfactorily beyond a limb-dip of 5°. A proposed extension of the treatment continues the wavelength-selection mechanism of the infinitesimal treatment up to a limb-dip of 15°; after this stage the wavelength-selection mechanism no longer operates and fold shape is mainly determined by L/L_d and limb-dip.
Strain-rates and finite strains in the medium are calculated for all stages of the L/L_d = 1 and L/L_d = 4.6 folds. At limb-dips greater than 45° the planes of maximum flattening and maximum flattening rate show the characteristic orientation and fanning of axial-plane cleavage.
Abstract:
One of the major problems in the mass production of sugpo is how to obtain a constant supply of fry. Since ultimately it is the private sector which should produce the sugpo fry to fill the needs of the industry, the Barangay Hatchery Project under the Prawn Program of the Aquaculture Department of SEAFDEC has scaled down the hatchery technology from large tanks to a level which can be adopted by the private sector, especially in the villages, with a minimum of financial and technical inputs. This guide to small-scale hatchery operations is expected to generate more enthusiasm among fish farmers interested in venturing into sugpo culture.
Abstract:
The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems are also explored, which are motivated by power systems. The first one is “flow optimization over a flow network” and the second one is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.
Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies between brain regions—have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem is investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when only a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may find most of the circuit topology if the exact covariance matrix is well-conditioned. However, it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to the resting-state fMRI data of a number of healthy subjects.
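A toy version of the first step, under simplifying assumptions that are not the dissertation's circuit model (node signals drawn from a Gaussian whose precision matrix is a regularized graph Laplacian), shows how the estimated inverse covariance exposes the edge set:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative edge set; the precision matrix below is a graph Laplacian
# plus a small diagonal term, so its off-diagonal support IS the topology.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
prec_true = 0.5 * np.eye(n)
for i, j in edges:
    prec_true[i, i] += 1.0; prec_true[j, j] += 1.0
    prec_true[i, j] -= 1.0; prec_true[j, i] -= 1.0

# Sample node signals and estimate the inverse covariance from data.
cov_true = np.linalg.inv(prec_true)
samples = rng.multivariate_normal(np.zeros(n), cov_true, size=100_000)
prec_hat = np.linalg.inv(np.cov(samples.T))

# Thresholding the off-diagonal entries recovers the edge set.
recovered = {(i, j) for i in range(n) for j in range(i + 1, n)
             if abs(prec_hat[i, j]) > 0.2}
print("recovered edges:", sorted(recovered))
```

With few samples or an ill-conditioned covariance this plain matrix inverse breaks down, which is where the graphical lasso and the modification discussed above come in.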
Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., power over-delivery) is not needed in practice under a very mild angle assumption.
Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.
Abstract:
This thesis belongs to the growing field of economic networks. In particular, we develop three essays in which we study the problem of bargaining, discrete choice representation, and pricing in the context of networked markets. Despite analyzing very different problems, the three essays share the common feature of making use of a network representation to describe the market of interest.
In Chapter 1 we present an analysis of bargaining in networked markets. We make two contributions. First, we characterize market equilibria in a bargaining model and find that players' equilibrium payoffs coincide with their degree of centrality in the network, as measured by Bonacich's centrality measure. This characterization allows us to map, in a simple way, network structures into market equilibrium outcomes, so that payoff dispersion in networked markets is driven by players' network positions. Second, we show that the market equilibrium for our model converges to the so-called eigenvector centrality measure. We show that the economic condition for reaching convergence is that the players' discount factor goes to one. In particular, we show how the discount factor, the matching technology, and the network structure interact in a very particular way in order to see the eigenvector centrality as the limiting case of our market equilibrium.
We point out that the eigenvector approach is a way of finding the most central or relevant players in terms of the “global” structure of the network, while paying less attention to patterns that are more “local”. Mathematically, the eigenvector centrality captures the relevance of players in the bargaining process, using the eigenvector associated with the largest eigenvalue of the adjacency matrix of a given network. Thus our result may be viewed as an economic justification of the eigenvector approach in the context of bargaining in networked markets.
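As a sketch of the limiting object, eigenvector centrality can be computed by power iteration on the adjacency matrix; the four-node graph below is illustrative, not one from the thesis:

```python
import numpy as np

# Adjacency matrix of a small example network: a triangle (nodes 0, 1, 2)
# with a pendant node 3 attached to node 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Power iteration converges to the eigenvector of the largest eigenvalue
# (the graph is connected and non-bipartite, so the iteration is well-behaved).
x = np.ones(len(A))
for _ in range(200):
    x = A @ x
    x /= np.linalg.norm(x)

centrality = x / x.sum()   # normalize so the scores sum to one
print(np.round(centrality, 3))
```

Node 2, which bridges the triangle and the pendant node, receives the highest score, illustrating how the measure rewards "global" position rather than raw local counts alone.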
As an application, we analyze the special case of seller-buyer networks, showing how our framework may be useful for analyzing price dispersion as a function of sellers and buyers' network positions.
Finally, in Chapter 3 we study the problem of price competition and free entry in networked markets subject to congestion effects. In many environments, such as communication networks in which network flows are allocated, or transportation networks in which traffic is directed through the underlying road architecture, congestion plays an important role. In particular, we consider a network with multiple origins and a common destination node, where each link is owned by a firm that sets prices in order to maximize profits, whereas users want to minimize the total cost they face, which is given by the congestion cost plus the prices set by firms. In this environment, we introduce the notion of Markovian traffic equilibrium to establish the existence and uniqueness of a pure-strategy price equilibrium, without assuming that the demand functions are concave or imposing particular functional forms on the latency functions. We derive explicit conditions to guarantee existence and uniqueness of equilibria. Given this existence and uniqueness result, we apply our framework to study entry decisions and welfare, and establish that in congested markets with free entry, the number of firms exceeds the social optimum.
Abstract:
Roughly one half of the world's languages are in danger of extinction. The endangered languages, spoken by minorities, typically compete with powerful languages such as English or Spanish. Consequently, the speakers of minority languages have to consider that not everybody can speak their language, turning the language choice into a strategic, coordination-like situation. We show experimentally that the displacement of minority languages may be partially explained by imperfect information about the linguistic type of the partner, leading to frequent failure to coordinate on the minority language even between two speakers who can and prefer to use it. The extent of miscoordination correlates with how minoritarian a language is and with the real-life linguistic condition of the subjects: the more endangered a language, the harder it is to coordinate on its use, and the people on whom the language's survival relies most acquire behavioral strategies that lower its use. Our game-theoretical treatment of the issue provides a new perspective for linguistic policies.
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad-hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
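A stripped-down sketch of the adaptive loop, illustrating the idea of posterior-guided test selection rather than the EC2 criterion or BROAD itself: two hypothetical threshold-type choice rules, tests chosen where the rules disagree, and Bayesian updating under a known subject-error rate (all values are assumptions for the illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hypothetical theories of choice, each a rule mapping a test parameter
# to a predicted response; "B" is the simulated ground truth.
theories = {"A": lambda t: t < 0.5, "B": lambda t: t < 0.8}
tests = np.linspace(0.05, 0.95, 19)      # candidate test parameters
posterior = {name: 0.5 for name in theories}
noise = 0.1                              # probability of a subject error
truth = theories["B"]

for _ in range(25):
    # The informative tests are those on which the theories disagree.
    disagree = [t for t in tests if theories["A"](t) != theories["B"](t)]
    t = rng.choice(disagree)
    # Simulate a noisy subject response, then update the posterior by Bayes.
    response = truth(t) if rng.random() > noise else not truth(t)
    for name, rule in theories.items():
        posterior[name] *= (1 - noise) if rule(t) == response else noise
    total = sum(posterior.values())
    posterior = {k: v / total for k, v in posterior.items()}

print({k: round(v, 3) for k, v in posterior.items()})
```

Even with a 10% error rate, a couple of dozen well-chosen tests concentrate the posterior on the true theory, which is the intuition behind preferring noise-robust criteria like EC2.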
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
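The discount functions being compared can be written compactly. The parameter values below are illustrative, not estimates from the experiment, and the quasi-hyperbolic model is shown in its common (β, δ) parametrization:

```python
# Three discount functions from the time-preference literature, with
# illustrative (assumed) parameter values.
def exponential(t, delta=0.9):
    """Standard exponential discounting: delta^t."""
    return delta ** t

def hyperbolic(t, k=0.5):
    """Hyperbolic discounting: 1 / (1 + k*t)."""
    return 1 / (1 + k * t)

def quasi_hyperbolic(t, beta=0.7, delta=0.9):
    """Present-biased (beta, delta) discounting: full weight today, beta * delta^t later."""
    return 1.0 if t == 0 else beta * delta ** t

for t in [0, 1, 5, 10]:
    print(t, round(exponential(t), 3), round(hyperbolic(t), 3),
          round(quasi_hyperbolic(t), 3))
```

The discontinuous drop of the quasi-hyperbolic weight between t = 0 and t = 1 is what generates "present bias" and the time-inconsistent choices the experiment probes.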
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioural theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways the standard rational model cannot explain. Specifically, loss aversion predicts that when an item is being offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
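A toy reference-dependent logit model illustrates the prediction being tested; the utilities, loss-aversion coefficient, and prices are hypothetical, not the estimated model:

```python
import math

# Reference-dependent utility: deviations from a reference price are coded
# as gains or losses, with losses weighted more heavily (loss aversion).
def gain_loss(x, lam=2.25):
    """Value of a gain/loss x with loss-aversion coefficient lam."""
    return x if x >= 0 else lam * x

def choice_prob(price, ref_price, base_utility=5.0, price_sens=1.0):
    """Logit probability of buying vs. an outside option with utility 0."""
    u = base_utility - price_sens * price + gain_loss(ref_price - price)
    return 1 / (1 + math.exp(-u))

p_ref = 5.0
print("at reference price:", round(choice_prob(5.0, p_ref), 3))
print("after a $1 discount:", round(choice_prob(4.0, p_ref), 3))
print("after a $1 increase:", round(choice_prob(6.0, p_ref), 3))
```

The drop in purchase probability after a $1 increase exceeds the rise after a $1 discount: that asymmetry, layered on top of ordinary price sensitivity, is the signature a loss-averse discrete choice model looks for in retail data.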
In future work, BROAD can be applied widely to test different behavioural models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.