992 results for Computational economics
Abstract:
The first chapter analyzes conditional assistance programs. They generate conflicting relationships between international financial institutions (IFIs) and member countries. The experience of IFIs with conditionality in the 1990s led them to allow countries more latitude in the design of their reform programs. A reformist government does not need conditionality, and conditionality is useless if the government does not want to reform. A government that faces opposition, however, may use conditionality and the help of pro-reform lobbies as a lever to counteract anti-reform groups and succeed in implementing reforms. The second chapter analyzes economies saddled with taxes and regulations. I consider an economy in which many taxes, subsidies, and other distortionary restrictions are in place simultaneously. Starting from a laissez-faire equilibrium that is inefficient because of some domestic distortion, a small trade tax or subsidy can yield a first-order welfare improvement, even if the instrument itself creates distortions of its own. This may result in "welfare paradoxes". The purpose of the chapter is to quantify the welfare effects of changes in tax rates in a small open economy. I conduct the simulation in the context of an intertemporal utility-maximization framework, applying numerical methods to the model developed by Karayalcin. I introduce changes in the tax rates and quantify both the impact on welfare, consumption, and foreign assets, and the path to the new steady-state values. The third chapter studies the role of stock markets and adjustment costs in the international transmission of supply shocks. The analysis of a positive supply shock originating in one of the countries shows that, on impact, the shock leads to an immediate stock market boom in the country enjoying the technological advance, while the other country suffers depressed stock market prices as demand for its equity declines. A period of adjustment follows, culminating in steady-state capital and output levels identical to those before the shock. The capital stock of one country undergoes a non-monotonic adjustment. The model is tested with plausible values of the variables, and the numerical results confirm the predictions of the theory.
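A minimal numerical sketch of the kind of exercise described, not the Karayalcin model itself: a toy small open economy with CRRA utility and a fixed world interest rate, in which a change in a consumption tax shifts consumption, foreign assets, and discounted welfare. All functional forms and parameter values are illustrative assumptions.

```python
# Minimal sketch (not the Karayalcin model): welfare effect of a small
# consumption-tax change in a toy small open economy with a fixed world
# interest rate.  Parameter values are illustrative.
import numpy as np

beta, sigma = 0.96, 2.0                    # discount factor, CRRA coefficient
r = 1 / beta - 1                           # world rate chosen so beta*(1+r)=1
y, a0, T = 1.0, 0.0, 300                   # endowment, initial foreign assets, horizon

def u(c):
    return c ** (1 - sigma) / (1 - sigma)

def simulate(tau):
    """Constant-consumption path consistent with the intertemporal budget
    constraint when beta*(1+r)=1 and tax revenue is not rebated."""
    wealth = a0 + y / r                    # present value of resources (perpetuity)
    c = r * wealth / (1 + tau)             # permanent-income consumption
    a, assets, welfare = a0, [], 0.0
    for t in range(T):
        assets.append(a)
        welfare += beta ** t * u(c)
        a = (1 + r) * a + y - (1 + tau) * c
    return c, np.array(assets), welfare

c0, a_path0, w0 = simulate(tau=0.00)
c1, a_path1, w1 = simulate(tau=0.05)
print(f"consumption: {c0:.4f} -> {c1:.4f}, welfare change: {w1 - w0:.4f}")
```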
Abstract:
Scoliosis is a spinal deformity that requires surgical correction in progressive cases. In order to optimize surgical outcomes, patient-specific finite element models are being developed by our group. In this paper, a single rod anterior correction procedure is simulated for a group of six scoliosis patients. For each patient, personalised model geometry was derived from low-dose CT scans, and clinically measured intra-operative corrective forces were applied. However, tissue material properties were not patient-specific, being derived from existing literature. Clinically, the patient group had a mean initial Cobb angle of 47.3 degrees, which was corrected to 17.5 degrees after surgery. The mean simulated post-operative Cobb angle for the group was 18.1 degrees. Although this represents good agreement between clinical and simulated corrections, the discrepancy between clinical and simulated Cobb angle for individual patients varied between -10.3 and +8.6 degrees, with only three of the six patients matching the clinical result to within the accepted Cobb measurement error of ±5 degrees. The results of this study suggest that spinal tissue material properties play an important role in governing the correction obtained during surgery, and that patient-specific modelling approaches must address the question of how to prescribe patient-specific soft tissue properties for spine surgery simulation.
Abstract:
In recent times, computational algorithms inspired by biological processes and evolution have been gaining popularity for solving science and engineering problems. These algorithms are broadly classified into evolutionary computation and swarm intelligence algorithms, derived by analogy with natural evolution and biological activities. They include genetic algorithms, genetic programming, differential evolution, particle swarm optimization, ant colony optimization, artificial neural networks, etc. Being random-search techniques, the algorithms use heuristics to guide the search towards the optimal solution and to speed up convergence to global optima. Bio-inspired methods have several attractive features and advantages compared to conventional optimization solvers. They also make it possible to combine simulation and optimization in a single environment to solve real-world problems that are hard to define in simple expressions. These biologically inspired methods have provided novel ways of solving practical problems in traffic routing, networking, games, industry, robotics, economics, mechanical, chemical, electrical, and civil engineering, water resources, and other fields. This article discusses the key features and development of bio-inspired computational algorithms and their scope for application in science and engineering fields.
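As a concrete illustration of the random-search-with-heuristics idea, here is a minimal particle swarm optimization sketch on a toy objective (the sphere function); the coefficient values are common textbook defaults, not values taken from the article.

```python
# Minimal particle swarm optimization (PSO) sketch on a simple test function;
# the coefficients (w, c1, c2) are common textbook choices.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                       # objective: sum of squares, minimum at 0
    return np.sum(x ** 2, axis=1)

dim, n_particles, iters = 5, 30, 200
w, c1, c2 = 0.7, 1.5, 1.5            # inertia, cognitive and social weights

pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), sphere(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = sphere(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best value found:", pbest_val.min())
```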
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and the more ambitiously we extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees relative to the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
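A simplified sketch of the sequential Bayesian design loop described here, with a myopic expected-entropy rule standing in for the EC2 criterion (which is not reproduced); the hypothesis set, choice-probability table, and noise level are invented for illustration.

```python
# Simplified sketch of sequential Bayesian test selection over competing
# hypotheses.  Expected posterior entropy is used as a stand-in for the EC2
# criterion; likelihoods and the noise floor are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_hyp, n_tests, noise = 4, 50, 0.1

# p_choose[h, t]: probability that a subject of type h picks option A on
# test t (made up here for illustration).
p_choose = np.clip(rng.random((n_hyp, n_tests)), noise, 1 - noise)
posterior = np.full(n_hyp, 1 / n_hyp)
true_h = 2                                       # simulated ground-truth type

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

for step in range(10):
    scores = []
    for t in range(n_tests):                     # expected post-test entropy
        score = 0.0
        for lik in (p_choose[:, t], 1 - p_choose[:, t]):   # answer A or B
            p_resp = float(np.dot(posterior, lik))
            score += p_resp * entropy(posterior * lik / p_resp)
        scores.append(score)
    t_star = int(np.argmin(scores))              # "most informative" test
    resp = rng.random() < p_choose[true_h, t_star]        # simulated noisy answer
    lik = p_choose[:, t_star] if resp else 1 - p_choose[:, t_star]
    posterior = posterior * lik
    posterior /= posterior.sum()

print("posterior over hypotheses:", np.round(posterior, 3))
```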
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice and because we do not find any signatures of it in our data.
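As a small illustration of how two of these theories score the same lottery, the sketch below evaluates a two-outcome gamble under CRRA expected utility and under a simplified (separable-weighting) prospect-theory value function; the functional forms are standard textbook ones (Tversky-Kahneman 1992 for prospect theory) and all parameter values are illustrative, not estimates from this experiment.

```python
# Two of the candidate theories scoring the same two-outcome lottery.
# Functional forms are textbook ones; parameters are illustrative.
def crra_utility(lottery, endowment=100.0, rho=0.5):
    """Expected CRRA utility over final wealth (endowment + outcome)."""
    return sum(p * ((endowment + x) ** (1 - rho)) / (1 - rho) for x, p in lottery)

def pt_weight(p, gamma=0.61):
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

def prospect_value(lottery, alpha=0.88, lam=2.25):
    """Simplified prospect-theory value: gains/losses relative to a zero
    reference point, with separable (non-cumulative) probability weighting."""
    total = 0.0
    for x, p in lottery:
        v = x ** alpha if x >= 0 else -lam * ((-x) ** alpha)
        total += pt_weight(p) * v
    return total

# Lottery: win 50 with probability 0.5, lose 20 with probability 0.5.
lottery = [(50.0, 0.5), (-20.0, 0.5)]
print("CRRA expected utility:", round(crra_utility(lottery), 3))
print("Prospect-theory value:", round(prospect_value(lottery), 3))
```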
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
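For reference, the discount functions under comparison can be written down and evaluated directly; the sketch below uses the standard textbook forms (the generalized-hyperbolic form follows the Loewenstein-Prelec specification), with illustrative parameter values rather than the thesis's estimates.

```python
# Sketch of the discount functions being compared; parameters are illustrative.
import numpy as np

t = np.array([0.0, 1.0, 5.0, 10.0, 30.0])   # delay, e.g. in weeks

def exponential(t, delta=0.95):
    return delta ** t

def hyperbolic(t, k=0.1):
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta=0.7, delta=0.95):
    return np.where(t == 0, 1.0, beta * delta ** t)   # "present bias" at t > 0

def generalized_hyperbolic(t, alpha=1.0, gamma=2.0):
    return (1.0 + alpha * t) ** (-gamma / alpha)

for name, f in [("exponential", exponential), ("hyperbolic", hyperbolic),
                ("quasi-hyperbolic", quasi_hyperbolic),
                ("generalized hyperbolic", generalized_hyperbolic)]:
    print(f"{name:>22}: {np.round(f(t), 3)}")
```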
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
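The thesis's stochastic argument is not reproduced here, but a deterministic special case shows the mechanism: exponential discounting applied to a concave subjective-time clock already produces a generalized-hyperbolic discount function, which exhibits temporal choice inconsistency.

```latex
% Deterministic illustration (not the thesis's proof): let subjective time be
% the concave clock \tau(t) = \ln(1 + k t)/k, k > 0, and discount
% exponentially at rate \rho over subjective time.
D(t) \;=\; e^{-\rho\,\tau(t)}
     \;=\; e^{-\frac{\rho}{k}\ln(1 + k t)}
     \;=\; (1 + k t)^{-\rho/k},
% which is the generalized-hyperbolic discount function and displays the
% decreasing impatience responsible for dynamic inconsistency.
```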
We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, demand for it will be greater than its price elasticity alone would explain. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
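A sketch of how reference-dependent, loss-averse price terms could enter a logit discrete choice model of this kind; the prices, reference prices, and coefficients below are hypothetical and are not the retailer's data or the estimated model.

```python
# Sketch of a loss-averse utility inside a multinomial logit choice model:
# price deviations from a reference price enter asymmetrically.
import numpy as np

def choice_probabilities(prices, ref_prices, alpha, beta_gain, beta_loss):
    """Logit choice probabilities with reference-dependent price utility."""
    gain = np.maximum(ref_prices - prices, 0.0)   # price below reference
    loss = np.maximum(prices - ref_prices, 0.0)   # price above reference
    v = -alpha * prices + beta_gain * gain - beta_loss * loss
    expv = np.exp(v - v.max())
    return expv / expv.sum()

prices = np.array([10.0, 9.0, 11.0])       # current prices of three substitutes
ref = np.array([10.0, 10.0, 10.0])         # reference (e.g. past) prices
# Loss aversion: the loss coefficient exceeds the gain coefficient.
print(choice_probabilities(prices, ref, alpha=0.3, beta_gain=0.2, beta_loss=0.5))
```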
In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
This chapter presents fuzzy cognitive maps (FCM) as a vehicle for Web knowledge aggregation, representation, and reasoning. The corresponding Web KnowARR framework incorporates findings from fuzzy logic. The first emphasis is on the Web KnowARR framework itself; as a second focal point, a stakeholder management use case illustrates the framework's usefulness. This form of management is intended to help projects achieve acceptance and assertiveness where claims on company decisions are actively involved in the management process. Stakeholder maps visually (re-)present these claims. They draw on non-public content on the one hand, and on content available to the public (mostly on the Web) on the other. The Semantic Web offers opportunities not only to present public content descriptively but also to show relationships. The proposed framework can serve as the basis for the public content of stakeholder maps.
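A minimal illustration of the FCM machinery referenced here: concepts as nodes, signed weights encoding causal influence, and activations iterated through a sigmoid until they stabilise. The three concepts and the weight values are invented for illustration and are not part of the Web KnowARR framework.

```python
# Minimal fuzzy cognitive map (FCM) sketch with made-up concepts and weights.
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

# weights[i, j]: influence of concept j on concept i
weights = np.array([
    [0.0,  0.6, -0.3],   # e.g. "public acceptance"
    [0.4,  0.0,  0.5],   # e.g. "stakeholder support"
    [-0.2, 0.3,  0.0],   # e.g. "project risk"
])
state = np.array([0.5, 0.2, 0.8])          # initial concept activations

for _ in range(30):
    new_state = sigmoid(weights @ state + state)   # keep memory of own activation
    if np.max(np.abs(new_state - state)) < 1e-4:
        break
    state = new_state

print("stable concept activations:", np.round(state, 3))
```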
Abstract:
The study investigates the role of credit risk in a continuous-time stochastic asset allocation model, since the traditional dynamic framework does not provide credit-risk flexibility. The general model of the study extends the traditional dynamic efficiency framework by explicitly deriving the optimal value function for the infinite-horizon stochastic control problem via a weighted volatility measure of market and credit risk. The model's optimal strategy was then compared to that obtained from a benchmark Markowitz-type dynamic optimization framework to determine which specification adequately reflects the optimal terminal investment returns and strategy under credit and market risk. The paper shows that an investor's optimal terminal return is lower than typically indicated under the traditional mean-variance framework during periods of elevated credit risk. Hence I conclude that, while the traditional dynamic mean-variance approach may indicate the ideal, in the presence of credit risk it does not accurately reflect the observed optimal returns, terminal wealth, and portfolio selection strategies.
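An illustrative sketch only, not the paper's value-function derivation: a Merton-style allocation computed once with market volatility alone and once with a simple weighted combination of market and credit volatility, showing the direction of the paper's conclusion that the optimal risky share and expected terminal wealth fall when credit risk is priced in. All figures are made up.

```python
# Illustrative comparison: optimal risky share and expected terminal wealth
# factor under market risk only vs. a weighted market + credit volatility.
import numpy as np

mu, rf, gamma = 0.08, 0.02, 3.0       # expected return, risk-free rate, risk aversion
sigma_market, sigma_credit, w_credit = 0.18, 0.12, 0.5   # volatilities and weight

def optimal_share(sigma):
    """Merton-style constant risky share for CRRA preferences."""
    return (mu - rf) / (gamma * sigma ** 2)

def expected_terminal_factor(share, sigma, horizon=10):
    drift = rf + share * (mu - rf) - 0.5 * share ** 2 * sigma ** 2
    return np.exp(drift * horizon)

for label, sigma in [("market risk only", sigma_market),
                     ("market + credit risk",
                      np.sqrt(sigma_market ** 2 + w_credit * sigma_credit ** 2))]:
    share = optimal_share(sigma)
    print(f"{label:>22}: risky share = {share:.2f}, "
          f"E[terminal wealth factor] = {expected_terminal_factor(share, sigma):.2f}")
```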
Abstract:
The Geographical Simulation Model developed by IDE-JETRO (IDE-GSM) is a computer simulation model based on spatial economics. IDE-GSM enables us to predict the economic impacts of various trade and transport facilitation measures. Here, we mainly compare the prioritized projects of the Master Plan on ASEAN Connectivity (MPAC) and the Comprehensive Asia Development Plan (CADP). MPAC focuses on specific hard or soft infrastructure projects that connect one ASEAN member state to another, while the CADP emphasizes the importance of economic corridors or linkages between one large cluster and another. The simulation analysis shows that, compared with MPAC projects, CADP projects have much larger positive impacts on ASEAN countries.