Abstract:
We consider the following singularly perturbed linear two-point boundary-value problem:
Ly(x) ≡ Ω(ε)D_x y(x) − A(x,ε)y(x) = f(x,ε), 0 ≤ x ≤ 1, (1a)
By ≡ L(ε)y(0) + R(ε)y(1) = g(ε), (1b)
where ε is a small positive parameter (ε → 0^+).
Here Ω(ε) is a diagonal matrix whose first m diagonal elements are 1 and whose last m diagonal elements are ε. Aside from reasonable continuity conditions placed on A, L, R, f, and g, we assume that the lower right m×m principal submatrix of A has no eigenvalues whose real part is zero. Under these assumptions, a constructive technique is used to derive sufficient conditions for the existence of a unique solution of (1). These sufficient conditions are used to define when (1) is a regular problem. It is then shown that as ε → 0^+ the solution of a regular problem exists and converges on every closed subinterval of (0,1) to a solution of the reduced problem. The reduced problem consists of the differential equation obtained by formally setting ε equal to zero in (1a), together with initial conditions obtained from the boundary conditions (1b). Several examples of regular problems are also considered.
A similar technique is used to derive the properties of the solution of a particular difference scheme used to approximate (1). Under restrictions on the boundary conditions (1b), it is shown that, for stepsizes much larger than ε, the solution of the difference scheme, when applied to a regular problem, accurately represents the solution of the reduced problem.
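This behaviour can be illustrated on a scalar model problem. The following Python sketch is our own toy construction, not the paper's scheme: an implicit backward-difference step for εy′(x) = a·y(x) + f(x) with Re(a) < 0, showing the discrete solution tracking the reduced solution −f/a even when the stepsize h is much larger than ε.

```python
import numpy as np

# Toy model problem: eps * y'(x) = a*y(x) + f(x), y(0) = g, Re(a) < 0.
# An implicit (backward-difference) step stays well behaved for h >> eps.
eps, a, g = 1e-6, -1.0, 2.0
f = lambda x: np.sin(x)
h = 1e-2                                   # stepsize h >> eps
x = np.arange(0.0, 1.0 + h, h)

y = np.empty_like(x)
y[0] = g
for i in range(1, len(x)):
    # implicit step: eps*(y[i]-y[i-1])/h = a*y[i] + f(x[i])
    y[i] = (eps * y[i - 1] + h * f(x[i])) / (eps - h * a)

y_reduced = -f(x) / a                      # formally setting eps = 0
# away from the boundary layer at x = 0, the two agree closely
print(np.max(np.abs(y[5:] - y_reduced[5:])))
```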
Furthermore, the existence of a similarity transformation which block-diagonalizes a matrix is presented, as well as exponential bounds on certain fundamental solution matrices associated with the problem (1).
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. Now there is a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests: Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
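To make the selection loop concrete, here is a toy EC2-style greedy chooser in Python. This is our own simplification under stated assumptions: the hypothesis set, the predict function, and the noiseless-response update are all illustrative inventions, not the thesis's BROAD implementation, which also handles noise and uses accelerated greedy evaluation.

```python
import itertools, random

# Toy EC2-style greedy test selection (illustrative only).
random.seed(0)
tests = list(range(20))
# hypothetical hypotheses: a theory label plus one threshold parameter
hyps = [(theory, th) for theory in ("EV", "PT") for th in range(5)]
prior = {h: 1.0 / len(hyps) for h in hyps}

def predict(h, t):
    theory, th = h                      # deterministic 0/1 choice
    return int((t + (3 if theory == "PT" else 0)) % 7 > th)

def weight_cut(t, o, p):
    # EC2 places edges between hypotheses in different equivalence
    # classes (theories); an edge is cut once at least one endpoint is
    # inconsistent with the observed outcome o
    return sum(p[a] * p[b]
               for a, b in itertools.combinations(hyps, 2)
               if a[0] != b[0] and (predict(a, t) != o or predict(b, t) != o))

def expected_cut(t, p):
    mass = {o: sum(p[h] for h in hyps if predict(h, t) == o) for o in (0, 1)}
    return sum(mass[o] * weight_cut(t, o, p) for o in (0, 1))

truth = ("PT", 2)                       # hidden data-generating hypothesis
p = dict(prior)
for _ in range(6):
    t = max(tests, key=lambda t: expected_cut(t, p))
    o = predict(truth, t)               # subject's (noiseless) response
    p = {h: (v if predict(h, t) == o else 0.0) for h, v in p.items()}
    z = sum(p.values())
    p = {h: v / z for h, v in p.items()}
print(max(p, key=p.get))                # -> ('PT', 2) or an equivalent type
```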
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out both because it is infeasible in practice and because we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
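For reference, the candidate discount-function families being compared take the following standard forms; the parameter values below are illustrative, and the thesis's exact parameterizations may differ.

```python
import numpy as np

# Standard discount-function families D(t), with illustrative parameters.
t = np.linspace(0.0, 10.0, 6)            # delay (arbitrary time units)

exponential = lambda t, delta=0.9: delta ** t
hyperbolic = lambda t, k=0.5: 1.0 / (1.0 + k * t)
# quasi-hyperbolic ("present bias"): D(0)=1, D(t)=beta*delta**t for t>0
quasi_hyper = lambda t, beta=0.7, delta=0.95: np.where(
    t == 0, 1.0, beta * delta ** t)
# generalized hyperbolic (Loewenstein-Prelec): D(t) = (1+a*t)**(-b/a)
gen_hyper = lambda t, a=1.0, b=2.0: (1.0 + a * t) ** (-b / a)

for D in (exponential, hyperbolic, quasi_hyper, gen_hyper):
    print(np.round(D(t), 3))
```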
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioural theories in the "wild". We pay particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinctly different from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
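A minimal sketch of the ingredient at work, assuming a reference-dependent gain/loss utility inside a binary logit purchase model: the function names, reference-price mechanism, and parameter values (including the classic λ ≈ 2.25 loss-aversion coefficient) are illustrative, not the estimated model from the retailer data.

```python
import numpy as np

def gain_loss(x, lam):
    # Kahneman-Tversky style value: losses weighted by lam > 1
    return np.where(x >= 0, x, lam * x)

def purchase_prob(price, ref_price, beta_price=1.5, lam=2.25):
    # surplus relative to a reference price; losses loom larger than gains
    v = beta_price * gain_loss(ref_price - price, lam)
    return 1.0 / (1.0 + np.exp(-v))       # logit choice probability

print(purchase_prob(price=8.0, ref_price=10.0))   # discounted: a "gain"
print(purchase_prob(price=12.0, ref_price=10.0))  # above reference: a "loss"
```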
In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
This study examined the sea cucumber industry in the Philippines through the value chain lens. The intent was to identify effective pathways for the successful introduction of sandfish culture as livelihood support for coastal communities. Value chain analysis is a high-resolution analytical tool that enables industry examination at a detailed level. Previous industry assessments have provided a general picture of the sea cucumber industry in the country. The present study builds on the earlier work and supplies additional details for a better understanding of the industry's status and problems, especially their implications for the Australian Centre for International Agricultural Research (ACIAR) funded sandfish project "Culture of sandfish (Holothuria scabra) in Asia-Pacific" (FIS/2003/059).
Abstract:
The following work explores the processes individuals utilize when making multi-attribute choices. With the exception of extremely simple or familiar choices, most decisions we face can be classified as multi-attribute choices. In order to evaluate and make choices in such an environment, we must be able to estimate and weight the particular attributes of an option. Hence, better understanding the mechanisms involved in this process is an important step for economists and psychologists. For example, when choosing between two meals that differ in taste and nutrition, what are the mechanisms that allow us to estimate and then weight attributes when constructing value? Furthermore, how can these mechanisms be influenced by variables such as attention or common physiological states, like hunger?
In order to investigate these and similar questions, we use a combination of choice and attentional data, where the attentional data was collected by recording eye movements as individuals made decisions. Chapter 1 designs and tests a neuroeconomic model of multi-attribute choice that makes predictions about choices, response time, and how these variables are correlated with attention. Chapter 2 applies the ideas in this model to intertemporal decision-making, and finds that attention causally affects discount rates. Chapter 3 explores how hunger, a common physiological state, alters the mechanisms we utilize as we make simple decisions about foods.
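One standard formalization of attention-modulated value construction is the attentional drift-diffusion model in the spirit of Krajbich et al.; the following simulation sketch is in that vein, but all parameter values and the alternating-fixation process are our own illustrative choices, not the thesis's fitted model.

```python
import random

def addm_choice(u_left, u_right, d=0.002, theta=0.3, sigma=0.02):
    # evidence x drifts toward the fixated option; theta < 1 discounts
    # the value of the currently unattended option
    x, looking_left = 0.0, random.random() < 0.5
    for t in range(1, 100000):
        if t % 400 == 0:                  # alternate fixations periodically
            looking_left = not looking_left
        if looking_left:
            drift = d * (u_left - theta * u_right)
        else:
            drift = d * (theta * u_left - u_right)
        x += drift + random.gauss(0.0, sigma)
        if abs(x) >= 1.0:                 # decision barrier reached
            return ("left" if x > 0 else "right"), t
    return ("left" if x > 0 else "right"), t

random.seed(1)
print(addm_choice(u_left=3.0, u_right=1.0))   # (choice, response time)
```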
Abstract:
This thesis presents a novel class of algorithms for the solution of scattering and eigenvalue problems on general two-dimensional domains under a variety of boundary conditions, including non-smooth domains and certain "Zaremba" boundary conditions, for which Dirichlet and Neumann conditions are specified on different portions of the domain boundary. The theoretical basis of the methods for the Zaremba problems on smooth domains is detailed information, put forth for the first time in this thesis, about the singularity structure of solutions of the Laplace operator under boundary conditions of Zaremba type. The new methods, which are based on use of Green functions and integral equations, incorporate a number of algorithmic innovations, including a fast and robust eigenvalue-search algorithm, use of the Fourier Continuation method for regularization of all smooth-domain Zaremba singularities, and newly derived quadrature rules which give rise to high-order convergence even around singular points of the Zaremba problem. The resulting algorithms enjoy high-order convergence and can tackle a variety of elliptic problems under general boundary conditions, including, for example, eigenvalue problems, scattering problems, and, in particular, eigenfunction expansions for time-domain problems in non-separable physical domains with mixed boundary conditions.
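As a schematic stand-in for the eigenvalue-search idea (not the thesis's boundary-integral algorithm), one can locate Dirichlet Laplace eigenvalues of the unit disk, which are known to be squares of Bessel-function zeros, by scanning a parameter for sign changes and bisecting; this mimics searching for the parameter values at which a boundary operator becomes singular.

```python
import numpy as np
from scipy.special import jv

# Dirichlet eigenvalues of the unit disk are k^2 with J_n(k) = 0.
def bisect(f, a, b, tol=1e-12):
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

ks = np.linspace(0.5, 10.0, 2000)        # scan grid in k
for n in range(3):
    f = lambda k, n=n: jv(n, k)
    vals = f(ks)
    roots = [bisect(f, ks[i], ks[i + 1])
             for i in range(len(ks) - 1) if vals[i] * vals[i + 1] < 0]
    print(n, [round(r, 6) for r in roots[:3]])
```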
Abstract:
This investigation deals with certain generalizations of the classical uniqueness theorem for the second boundary-initial value problem in the linearized dynamical theory of not necessarily homogeneous or isotropic elastic solids. First, the regularity assumptions underlying the foregoing theorem are relaxed by admitting stress fields with suitably restricted finite jump discontinuities. Such singularities are familiar from known solutions to dynamical elasticity problems involving discontinuous surface tractions or non-matching boundary and initial conditions. The proof of the appropriate uniqueness theorem given here rests on a generalization of the usual energy identity to the class of singular elastodynamic fields under consideration.
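For reference, the conventional energy identity being generalized reads, in standard indicial notation for a regular elastodynamic field on a body B (this is the classical statement, not the extended version proved in the thesis):

```latex
% rho: mass density, u: displacement, sigma/epsilon: stress and strain,
% t: surface traction on the boundary of B, f: body force
\frac{d}{dt}\int_{B}\left(\tfrac{1}{2}\,\rho\,\dot{u}_i\dot{u}_i
   + \tfrac{1}{2}\,\sigma_{ij}\varepsilon_{ij}\right)dV
 = \int_{\partial B} t_i\,\dot{u}_i\,dA
 + \int_{B} f_i\,\dot{u}_i\,dV
```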
Following this extension of the conventional uniqueness theorem, we turn to a further relaxation of the customary smoothness hypotheses and allow the displacement field to be differentiable merely in a generalized sense, thereby admitting stress fields with square-integrable unbounded local singularities, such as those encountered in the presence of focusing of elastic waves. A statement of the traction problem applicable in these pathological circumstances necessitates the introduction of "weak solutions" to the field equations that are accompanied by correspondingly weakened boundary and initial conditions. A uniqueness theorem pertaining to this weak formulation is then proved through an adaptation of an argument used by O. Ladyzhenskaya in connection with the first boundary-initial value problem for a second-order hyperbolic equation in a single dependent variable. Moreover, the second uniqueness theorem thus obtained contains, as a special case, a slight modification of the previously established uniqueness theorem covering solutions that exhibit only finite stress discontinuities.
Abstract:
There is widespread recognition of the need for better information sharing and provision to improve the viability of end-of-life (EOL) product recovery operations. The emergence of automated data capture and sharing technologies such as RFID, sensors, and networked databases has enhanced the ability to make product information available to recoverers, which will help them make better decisions regarding the choice of recovery option for EOL products. However, these technologies come with a cost attached, and hence the question 'what is their value?' is critical. This paper presents a probabilistic approach to model product recovery decisions and extends the concept of the Bayes factor for quantifying the impact of product information on the effectiveness of these decisions. Further, we provide a quantitative examination of the factors that influence the value of product information; this value depends on three factors: (i) the penalties for Type I and Type II errors of judgement regarding product quality; (ii) the prevalent uncertainty regarding product quality; and (iii) the strength of the information to support or contradict the belief. Furthermore, we show that information is not valuable under all circumstances and derive conditions for achieving a positive value of information. © 2010 Taylor & Francis.
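A numerical sketch of the qualitative point, with made-up costs and test accuracies rather than the paper's model: information about product quality has positive value only when it can flip the recover-versus-dispose decision, which depends on the prior and on the Type I/Type II penalties.

```python
# Recover/dispose decision under quality uncertainty (illustrative numbers).
def expected_cost(recover, p_good, c_type1, c_type2):
    # Type I: recovering a bad product; Type II: scrapping a good one
    return (1 - p_good) * c_type1 if recover else p_good * c_type2

def best_cost(p_good, c1, c2):
    return min(expected_cost(True, p_good, c1, c2),
               expected_cost(False, p_good, c1, c2))

def value_of_information(p_good, sensitivity, specificity, c1, c2):
    # value = prior best cost minus expected best cost after one test
    p_pos = p_good * sensitivity + (1 - p_good) * (1 - specificity)
    post_pos = p_good * sensitivity / p_pos
    post_neg = p_good * (1 - sensitivity) / (1 - p_pos)
    posterior_cost = (p_pos * best_cost(post_pos, c1, c2)
                      + (1 - p_pos) * best_cost(post_neg, c1, c2))
    return best_cost(p_good, c1, c2) - posterior_cost

print(value_of_information(0.60, 0.9, 0.9, c1=100.0, c2=40.0))  # clearly > 0
print(value_of_information(0.99, 0.9, 0.9, c1=100.0, c2=40.0))  # nearly 0
```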
Abstract:
Health is a very important aspect of any person's life; whenever a contingency occurs that diminishes the health of an individual or group of people, the different alternatives for combating the illness must be assessed rigorously and in detail, because patients' quality of life will vary depending on the alternative chosen. Health-related quality of life (HRQOL) is understood as the value assigned to the duration of life, modified by social opportunity, perception, functional status, and the impairment caused by an illness, accident, treatment, or policy (Sacristán et al., 1995). To determine the numerical value assigned to HRQOL for a given intervention, we must draw on the economic theory applied to the health evaluation of new interventions.

Among the methods of health economic evaluation, the cost-utility method uses as its measure of utility the quality-adjusted life year (QALY; AVAC in the Spanish literature), which takes into account, on the one hand, quality of life under a medical intervention and, on the other, the estimated years to be lived after the intervention. To determine quality of life, techniques such as the Standard Gamble, the Time Trade-Off, and the Rating Scale are used. These techniques yield a numerical value between 0 and 1, where 0 is the worst state and 1 is perfect health.

When interviewing a patient about utility in health terms, the question posed may involve risk or uncertainty. In that case, the Standard Gamble is applied to determine the numerical value of the patient's utility, or quality of life, under a given treatment. To obtain this value, the patient is presented with two scenarios: first, a health state with some probability of dying and some probability of surviving, and second, a state of certainty. Utility is determined by varying the probability of dying until reaching the probability at which the individual is indifferent between the risky state and the certain state. Similarly, there is the Time Trade-Off, which is easier to apply than the Standard Gamble: it plots, on a pair of axes, the value of health against the time to be spent in that state under a given treatment, and the value corresponding to quality of life is reached by varying the time until the individual is indifferent between the two alternatives. Finally, if what is expected from the patient is a ranked list of preferred health states under a treatment, we use the Rating Scale, which consists of a 10-centimetre horizontal line scored from 0 to 100. The interviewee places the list of health states on the scale in order of preference, and the scores are then normalized to the interval between 0 and 1.

Quality-adjusted life years are obtained by multiplying the quality-of-life value by the estimated number of years the patient will live. However, none of these methodologies considers age, so the inclusion of this variable becomes necessary. Moreover, patients may respond subjectively, in which case the opinion of an expert is required to determine the patient's level of disability. This introduces the concept of the disability-adjusted life year (DALY; AVAD in the Spanish literature), such that the QALY utility parameter is the complement of the DALY disability parameter: Q^i = 1 − D^i.
Unlike QALYs, DALYs also incorporate age-weighting parameters. Under the assumption Q = 1 − D, we can likewise determine the individual's quality of life before treatment. Once the QALYs gained have been obtained, we proceed to value them in monetary terms. To do so, we start from the assumption that the health intervention allows the individual to resume the work they had been doing, so we value the probable wages over a period equal to the QALYs gained, bearing in mind the limitations of this approach. Finally, we analyse the benefits derived from the treatment (the probable wage bill) using the GRF-95 (female population) and GRM-95 (male population) tables.
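A minimal worked example of the QALY arithmetic described above, with purely illustrative numbers (the utility, life expectancy, and wage are ours, and the wage-based valuation ignores the discounting and mortality-table detail of the full study):

```python
# Worked QALY example (illustrative numbers only).
utility = 0.75           # standard-gamble indifference probability
life_years = 20.0        # estimated years lived after the intervention
qalys_gained = utility * life_years       # QALYs (AVAC)

disability = 1 - utility                  # under the assumption Q = 1 - D
annual_wage = 25_000.0                    # probable yearly wage
monetary_value = qalys_gained * annual_wage  # wage-based valuation
print(qalys_gained, disability, monetary_value)
```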
Abstract:
This academic work is based on a study of the gold standard: its evolution over the years and its periods of boom and crisis. We also discuss the arguments with which some economists back a return to this monetary system.
Abstract:
The value chain analysis in this report focuses on smoked marine fish, overwhelmingly the most important fish product originating in Western Region, Ghana. Smoked fish from Western Region is mainly destined for the domestic market, where demand is very strong. Small quantities of smoked fish are destined for markets in Togo, Benin, and Nigeria. The underlying objective of the fisheries value chain analysis is to identify opportunities for growth in the fisheries value chain, with an emphasis on those opportunities that have the potential to generate significant additional livelihoods, particularly at the level of the fishing communities and for low-income groups. The results from the value chain analysis will be used to identify pilot interventions to promote those livelihood outcomes. The main focus of the study is smoked fish (the major species/product forms) destined for domestic markets. However, work will also be undertaken on the fresh fish trade and on frozen fish to find out more about the significance of these value chains.