82 results for Dynamic photorefractive volume grating
Abstract:
Researchers have used stylized facts on asset prices and trading volume in stock markets (in particular, the mean reversion of asset returns and the correlations between trading volume, price changes and price levels) to support theories where agents are not rational expected utility maximizers. This paper shows that this empirical evidence is in fact consistent with a standard infinite horizon perfect information expected utility economy where some agents face leverage constraints similar to those found in today's financial markets. In addition, and in sharp contrast to the theories above, we explain some qualitative differences that are observed in the price-volume relation on stock and on futures markets. We consider a continuous-time economy where agents maximize the integral of their discounted utility from consumption under both budget and leverage constraints. Building on the work by Vila and Zariphopoulou (1997), we find a closed form solution, up to a negative constant, for the equilibrium prices and demands in the region of the state space where the constraint is non-binding. We show that, at the equilibrium, stock holdings volatility as well as its ratio to stock price volatility are increasing functions of the stock price, and interpret this finding in terms of the price-volume relation.
Abstract:
Many revenue management (RM) industries are characterized by (a) fixed capacities in the short term (e.g., hotel rooms, seats on an airline flight), (b) homogeneous products (e.g., two airline flights between the same cities at similar times), and (c) customer purchasing decisions largely influenced by price. Competition in these industries is also very high, even with just two or three direct competitors in a market. However, RM competition is not well understood, and practically all known implementations of RM software and most published models of RM do not explicitly model competition. For this reason, there has been considerable recent interest and research activity to understand RM competition. In this paper we study price competition for an oligopoly in a dynamic setting, where each of the sellers has a fixed number of units available for sale over a fixed number of periods. Demand is stochastic, and depending on how it evolves, sellers may change their prices at any time. This reflects the fact that firms constantly, and almost costlessly, change their prices (alternately, allocations at a price in quantity-based RM), reacting either to updates in their estimates of market demand, competitor prices, or inventory levels. We first prove existence of a unique subgame-perfect equilibrium for a duopoly. In equilibrium, in each state sellers engage in Bertrand competition, so that the seller with the lowest reservation value ends up selling a unit at a price that is equal to the equilibrium reservation value of the competitor. This structure hence extends the marginal-value concept of bid-price control, used in many RM implementations, to a competitive model. In addition, we show that the seller with the lowest capacity sells all its units first. Furthermore, we extend the results transparently to n firms and perform a number of numerical comparative statics exploiting the uniqueness of the subgame-perfect equilibrium.
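To make the equilibrium structure concrete, here is a minimal backward-induction sketch in Python. It is an illustration under simplifying assumptions that are not in the paper: one customer arrives per period with probability Q, is willing to pay up to P_MAX, and always buys from the undercutting seller; all parameter values are made up. The seller with the lower marginal (reservation) value sells at the rival's reservation value, mirroring the bid-price logic described above.

    from functools import lru_cache

    Q = 0.6      # per-period customer arrival probability (assumed)
    T = 10       # number of selling periods (assumed)
    P_MAX = 1.0  # customer's willingness to pay, which caps the price (assumed)

    @lru_cache(maxsize=None)
    def V(t, c1, c2):
        """Equilibrium value pair (V1, V2) at period t with inventories c1, c2."""
        if t == T or (c1 == 0 and c2 == 0):
            return (0.0, 0.0)
        stay = V(t + 1, c1, c2)
        # Reservation value = opportunity cost of parting with one unit now;
        # a stocked-out seller is treated as quoting the price cap.
        r1 = stay[0] - V(t + 1, c1 - 1, c2)[0] if c1 > 0 else P_MAX
        r2 = stay[1] - V(t + 1, c1, c2 - 1)[1] if c2 > 0 else P_MAX
        if r1 <= r2:
            # Bertrand undercutting: seller 1 sells at seller 2's reservation value.
            sale = (V(t + 1, c1 - 1, c2)[0] + min(r2, P_MAX), V(t + 1, c1 - 1, c2)[1])
        else:
            sale = (V(t + 1, c1, c2 - 1)[0], V(t + 1, c1, c2 - 1)[1] + min(r1, P_MAX))
        return (Q * sale[0] + (1 - Q) * stay[0],
                Q * sale[1] + (1 - Q) * stay[1])

    print(V(0, 2, 3))  # equilibrium value pair at the start of the horizon

This sketches only the shape of the recursion; the paper's demand model and its proof of uniqueness are considerably richer.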
Abstract:
Perceptual maps have been used for decades by market researchers to shed light on the similarity between brands in terms of a set of attributes, to position consumers relative to brands in terms of their preferences, or to study how demographic and psychometric variables relate to consumer choice. Invariably these maps are two-dimensional and static. As we enter the era of electronic publishing, the possibilities for dynamic graphics are opening up. We demonstrate the usefulness of introducing motion into perceptual maps through four examples. The first example shows how a perceptual map can be viewed in three dimensions, and the second one moves between two analyses of the data that were collected according to different protocols. In a third example we move from the best view of the data at the individual level to one which focuses on between-group differences in aggregated data. A final example considers the case when several demographic variables or market segments are available for each respondent, showing an animation with increasingly detailed demographic comparisons. These examples of dynamic maps use several data sets from marketing and social science research.
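The "moving between two analyses" example can be illustrated with a small Python sketch (an assumed reconstruction, not the paper's own code): two configurations of the same points are aligned by an orthogonal Procrustes rotation and then linearly interpolated, one frame per step.

    import numpy as np

    def align(A, B):
        """Rotate configuration B (orthogonal Procrustes) to best match A."""
        U, _, Vt = np.linalg.svd(B.T @ A)
        return B @ (U @ Vt)

    rng = np.random.default_rng(0)
    map1 = rng.standard_normal((8, 2))   # brand coordinates from protocol 1 (toy data)
    map2 = rng.standard_normal((8, 2))   # brand coordinates from protocol 2 (toy data)
    map1 -= map1.mean(axis=0)            # centre both configurations first
    map2 -= map2.mean(axis=0)
    map2 = align(map1, map2)

    # One frame per interpolation step; rendering the frames in sequence
    # produces the animated transition between the two analyses.
    frames = [(1 - t) * map1 + t * map2 for t in np.linspace(0.0, 1.0, 20)]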
Abstract:
This paper presents a dynamic choice model in the attribute space considering rational consumers that discount the future. In light of the evidence of several state-dependence patterns, the model is further extended by considering a utility function that allows for the different types of behavior described in the literature: pure inertia, pure variety seeking and hybrid. The model presents a stationary consumption pattern that can be inertial, where the consumer only buys one product, or a variety-seeking one, where the consumer buys several products simultaneously. Under the inverted-U marginal utility assumption, the consumer behaves inertially among the existing brands for several periods and eventually, once the stationary levels are approached, turns to a variety-seeking behavior. An empirical analysis is run using a scanner database for fabric softener, and significant evidence of hybrid behavior is found for most attributes, which supports the functional form considered in the theory.
Abstract:
This paper argues that the strategic use of debt favours the revelation of information in dynamic adverse selection problems. Our argument is based on the idea that debt is a credible commitment to end long term relationships. Consequently, debt encourages a privately informed party to disclose its information at early stages of a relationship. We illustrate our point with the financing decision of a monopolist selling a good to a buyer whose valuation is private information. A high level of (renegotiable) debt, by increasing the scope for liquidation, may induce the high valuation buyer to buy early at a high price and thus increase the monopolist's expected payoff. By affecting the buyer's strategy, it may reduce the probability of excessive liquidation. We investigate the consequences of good durability and we examine the way debt may alleviate the ratchet effect.
Abstract:
Many multivariate methods that are apparently distinct can be linked by introducing one or more parameters in their definition. Methods that can be linked in this way are correspondence analysis, unweighted or weighted logratio analysis (the latter also known as "spectral mapping"), nonsymmetric correspondence analysis, principal component analysis (with and without logarithmic transformation of the data) and multidimensional scaling. In this presentation I will show how several of these methods, which are frequently used in compositional data analysis, may be linked through parametrizations such as power transformations, linear transformations and convex linear combinations. Since the methods of interest here all lead to visual maps of data, a "movie" can be made where the linking parameter is allowed to vary in small steps: the results are recalculated "frame by frame" and one can see the smooth change from one method to another. Several of these "movies" will be shown, giving a deeper insight into the similarities and differences between these methods.
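For a concrete feel of the "frame by frame" recalculation, here is a minimal Python sketch (an assumed illustration, not the presentation's own code): a Box-Cox-style power transformation (x^alpha - 1)/alpha tends to the log transform as alpha approaches 0, so sweeping alpha from 1 toward 0 morphs a principal component map of the raw data into one of the log-transformed data.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(0.1, 1.0, size=(30, 5))   # toy positive data (assumed)
    X = X / X.sum(axis=1, keepdims=True)      # close rows to sum to 1, compositional style

    def pc_map(Y):
        """Two-dimensional principal component map of column-centred data."""
        Yc = Y - Y.mean(axis=0)
        U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
        return U[:, :2] * s[:2]

    frames = []
    for alpha in np.linspace(1.0, 0.01, 25):  # alpha -> 0 approaches the log transform
        Z = (X**alpha - 1.0) / alpha          # Box-Cox power transformation
        frames.append(pc_map(Z))              # recalculate the map for this frame

In practice consecutive frames would also be rotated or reflected into mutual alignment (e.g., by Procrustes matching) so the movie changes smoothly rather than flipping signs between frames.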
Abstract:
We study the interaction between insurance and capital markets within a single but general framework. We show that capital markets greatly enhance the risk-sharing capacity of insurance markets and the scope of risks that are insurable, because efficiency does not depend on the number of agents at risk, nor on risks being independent, nor on the preferences and endowments of agents at risk being the same. We show that agents share risks by buying full coverage for their individual risks and provide insurance capital through stock markets. We show that aggregate risk enters private insurance as a positive loading on insurance prices, and that despite this agents will buy full coverage. The loading is determined by the risk premium of investors in the stock market and hence does not depend on the agents' willingness to pay. Agents provide insurance capital by trading an equally weighted portfolio of insurance company shares and the riskless asset. We are able to construct agents' optimal trading strategies explicitly, and for very general preferences.
Abstract:
In models where privately informed agents interact, agents may need to form higher order expectations, i.e. expectations of other agents' expectations. This paper develops a tractable framework for solving and analyzing linear dynamic rational expectations models in which privately informed agents form higher order expectations. The framework is used to demonstrate that the well-known problem of the infinite regress of expectations identified by Townsend (1983) can be approximated to an arbitrary accuracy with a finite-dimensional representation under quite general conditions. The paper is constructive: it presents a fixed point algorithm for finding an accurate solution and provides weak conditions that ensure that a fixed point exists. To help intuition, Singleton's (1987) asset pricing model with disparately informed traders is used as a vehicle for the paper.
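The flavour of such a fixed point algorithm can be sketched in a few lines of Python. The operator below is a toy linear contraction standing in for the model-specific mapping from a conjectured law of motion of the truncated expectations hierarchy to the actual one; everything here (the truncation order, the operator, the tolerance) is an illustrative assumption, not the paper's construction.

    import numpy as np

    k = 4                                   # truncation order of the hierarchy (assumed)
    rng = np.random.default_rng(1)
    A = rng.standard_normal((k, k))
    A = 0.7 * A / np.linalg.norm(A, 2)      # scale so the stand-in map is a contraction
    C = rng.standard_normal((k, k))

    def T(P):
        """Stand-in for the mapping: conjectured law of motion -> actual one."""
        return A @ P @ A.T + C

    P = np.zeros((k, k))
    for it in range(1000):
        P_next = T(P)
        if np.max(np.abs(P_next - P)) < 1e-10:  # fixed point reached
            break
        P = P_next
    print(f"fixed point found after {it} iterations")

In the paper, the existence of such a fixed point is what the weak conditions guarantee, and the accuracy of the finite-dimensional representation improves as the truncation order grows.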
Abstract:
This paper extends existing results on the type of insurance contracts needed for insurance market efficiency to a dynamic setting. It introduces continuously open markets that allow for more efficient asset allocation. It also eliminates the role of preferences and endowments in the classification of risks, which is done primarily in terms of the actuarial properties of the underlying risk process. The paper further extends insurability to include correlated and catastrophic events. Under these very general conditions the paper defines a condition that determines whether a small number of standard insurance contracts (together with aggregate assets) suffices to complete markets, or whether one needs to introduce assets such as mutual insurance.
Abstract:
We incorporate the process of enforcement learning by assuming that the agency's current marginal cost is a decreasing function of its past experience of detecting and convicting. The agency accumulates data and information (on criminals, on opportunities of crime), enhancing its ability to apprehend in the future at a lower marginal cost. We focus on the impact of enforcement learning on optimal stationary compliance rules. In particular, we show that the optimal stationary fine could be less than maximal and the optimal stationary probability of detection could be higher than otherwise.
Abstract:
Climate science indicates that climate stabilization requires low GHG emissions. Is this consistent with nondecreasing human welfare? Our welfare or utility index emphasizes education, knowledge, and the environment. We construct and calibrate a multigenerational model with intertemporal links provided by education, physical capital, knowledge and the environment. We reject discounted utilitarianism and adopt, first, the Pure Sustainability Optimization (or Intergenerational Maximin) criterion, and, second, the Sustainable Growth Optimization criterion, which maximizes the utility of the first generation subject to a given future rate of growth. We apply these criteria to our calibrated model via a novel algorithm inspired by the turnpike property. The computed paths yield levels of utility higher than the level at reference year 2000 for all generations. They require the doubling of the fraction of labor resources devoted to the creation of knowledge relative to the reference level, whereas the fractions of labor allocated to consumption and leisure are similar to the reference ones. On the other hand, higher growth rates require substantial increases in the fraction of labor devoted to education, together with moderate increases in the fractions of labor devoted to knowledge and to investment in physical capital.
Abstract:
A new parameter is introduced: the lightning potential index (LPI), which is a measure of the potential for charge generation and separation that leads to lightning flashes in convective thunderstorms. The LPI is calculated within the charge separation region of clouds, between 0°C and -20°C, where the noninductive mechanism involving collisions of ice and graupel particles in the presence of supercooled water is most effective. As shown in several case studies using the Weather Research and Forecasting (WRF) model with explicit microphysics, the LPI is highly correlated with observed lightning. It is suggested that the LPI may be a useful parameter for predicting lightning as well as a tool for improving weather forecasting of convective storms and heavy rainfall.
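The structure described above can be sketched as a grid-column diagnostic in Python. This is a schematic, assumed form, not the published LPI formula: the updraft kinetic energy w² is weighted by a factor that peaks where supercooled liquid and ice coexist, and the product is averaged over the 0°C to -20°C layer. The function name and the exact weighting are illustrative.

    import numpy as np

    def lpi_like(w, T, q_liq, q_ice, dz):
        """Schematic LPI-style diagnostic for one model column.

        w     : vertical velocity at each level (m/s)
        T     : temperature at each level (deg C)
        q_liq : supercooled liquid water mixing ratio (kg/kg)
        q_ice : ice/graupel mixing ratio (kg/kg)
        dz    : layer thickness at each level (m)
        """
        # Restrict to the charge separation region between 0 C and -20 C.
        layer = (T <= 0.0) & (T >= -20.0)
        # Weight peaks at 1 where liquid and ice mixing ratios are equal,
        # falling to 0 where either phase is absent (schematic choice).
        total = np.maximum(q_liq + q_ice, 1e-12)
        eps = 2.0 * np.sqrt(q_liq * q_ice) / total
        updraft_ke = np.where(w > 0.0, w, 0.0) ** 2   # only rising motion contributes
        depth = np.sum(dz * layer)
        return np.sum(eps * updraft_ke * dz * layer) / depth if depth > 0 else 0.0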
Abstract:
Weather radar observations are currently the most reliable method for remote sensing of precipitation. However, a number of factors affect the quality of radar observations and may seriously limit automated quantitative applications of radar precipitation estimates, such as those required in Numerical Weather Prediction (NWP) data assimilation or in hydrological models. In this paper, a technique to correct two different problems typically present in radar data is presented and evaluated. The aspects dealt with are non-precipitating echoes - caused either by permanent ground clutter or by anomalous propagation of the radar beam (anaprop echoes) - and topographical beam blockage. The correction technique is based on the computation of realistic beam propagation trajectories from recent radiosonde observations, instead of assuming standard radio propagation conditions. The correction consists of three different steps: 1) calculation of a Dynamic Elevation Map, which provides the minimum clutter-free antenna elevation for each pixel within the radar coverage; 2) correction for residual anaprop, checking the vertical reflectivity gradients within the radar volume; and 3) topographical beam blockage estimation and correction using a geometric optics approach. The technique is evaluated with four case studies in the region of the Po Valley (N Italy) using a C-band Doppler radar and a network of raingauges providing hourly precipitation measurements. The case studies cover different seasons, different radio propagation conditions, and both stratiform and convective precipitation events. After applying the proposed correction, a comparison of the radar precipitation estimates with raingauges shows a general reduction in both the root mean squared error and the fractional error variance, indicating the efficiency and robustness of the procedure. Moreover, the technique is not computationally expensive, so it seems well suited for implementation in an operational environment.
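The geometric optics step (step 3) has a standard closed form that can be sketched in Python. The circular-segment formula below for the partial blockage of the beam cross-section, and the 4/3 effective Earth radius height model shown for contrast (the paper replaces the latter with ray tracing through recent radiosonde profiles), are textbook approximations; function names and the usage example are illustrative.

    import numpy as np

    def beam_height(r, elev_deg, k=4.0 / 3.0, earth_radius=6.371e6):
        """Beam-axis height (m) above the radar at range r (m) under the
        standard k*Re effective Earth radius propagation model."""
        re = k * earth_radius
        elev = np.deg2rad(elev_deg)
        return np.sqrt(r**2 + re**2 + 2.0 * r * re * np.sin(elev)) - re

    def blockage_fraction(terrain_h, beam_h, beam_radius):
        """Fraction of the beam cross-section blocked by terrain
        (geometric optics, circular-segment formula)."""
        y = terrain_h - beam_h               # terrain height relative to the beam axis
        a = beam_radius
        if y <= -a:
            return 0.0                       # terrain entirely below the beam
        if y >= a:
            return 1.0                       # beam completely blocked
        seg = y * np.sqrt(a**2 - y**2) + a**2 * np.arcsin(y / a) + np.pi * a**2 / 2.0
        return seg / (np.pi * a**2)          # blocked area over full cross-section

    # Example: a 500 m hill at 50 km range, 0.5 deg elevation, 1 deg beamwidth.
    r = 50e3
    h = beam_height(r, 0.5)
    a = r * np.deg2rad(1.0) / 2.0            # half-power beam radius at this range
    print(blockage_fraction(500.0, h, a))

A blockage fraction estimated this way can then be used to correct the measured reflectivity before the rain-rate conversion.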