979 results for first order transition system
Abstract:
This paper studies the relationship between the amount of public information that stock market prices incorporate and the equilibrium behavior of market participants. The analysis is framed in a static, NREE setup where traders exchange vectors of assets, accessing multidimensional information under two alternative market structures. In the first (the unrestricted system), both informed and uninformed speculators can condition their demands for each traded asset on all equilibrium prices; in the second (the restricted system), they are restricted to conditioning their demand on the price of the asset they want to trade. I show that informed traders' incentives to exploit multidimensional private information depend on the number of prices they can condition upon when submitting their demand schedules, and on the specific price formation process one considers. Building on this insight, I then give conditions under which the restricted system is more efficient than the unrestricted system.
Abstract:
The paper proposes a numerical solution method for general equilibrium models with a continuum of heterogeneous agents, which combines elements of projection and of perturbation methods. The basic idea is to solve first for the stationary solution of the model, without aggregate shocks but with fully specified idiosyncratic shocks. Afterwards one computes a first-order perturbation of the solution in the aggregate shocks. This approach makes it possible to include a high-dimensional representation of the cross-sectional distribution in the state vector. The method is applied to a model of household saving with uninsurable income risk and liquidity constraints. The model includes not only productivity shocks, but also shocks to redistributive taxation, which cause substantial short-run variation in the cross-sectional distribution of wealth. If those shocks are operative, it is shown that a solution method based on very few statistics of the distribution is not suitable, while the proposed method can solve the model with high accuracy, at least for the case of small aggregate shocks. Techniques are discussed to reduce the dimension of the state space such that higher-order perturbations are feasible. Matlab programs to solve the model can be downloaded.
Abstract:
To recover a version of Barro's (1979) 'random walk' tax smoothing outcome, we modify Lucas and Stokey's (1983) economy to permit only risk-free debt. This imparts near-unit-root-like behavior to government debt, independently of the government expenditure process, a realistic outcome in the spirit of Barro's. We show how the risk-free-debt-only economy confronts the Ramsey planner with additional constraints on equilibrium allocations that take the form of a sequence of measurability conditions. We solve the Ramsey problem by formulating it in terms of a Lagrangian, and applying a Parameterized Expectations Algorithm to the associated first-order conditions. The first-order conditions and numerical impulse response functions partially affirm Barro's random walk outcome. Though the behaviors of tax rates, government surpluses, and government debts differ, allocations are very close for computed Ramsey policies across incomplete and complete markets economies.
Abstract:
We develop a mathematical programming approach for the classical PSPACE-hard restless bandit problem in stochastic optimization. We introduce a hierarchy of n (where n is the number of bandits) increasingly stronger linear programming relaxations, the last of which is exact and corresponds to the (exponential size) formulation of the problem as a Markov decision chain, while the other relaxations provide bounds and are efficiently computed. We also propose a priority-index heuristic scheduling policy from the solution to the first-order relaxation, where the indices are defined in terms of optimal dual variables. In this way we propose a policy and a suboptimality guarantee. We report results of computational experiments that suggest that the proposed heuristic policy is nearly optimal. Moreover, the second-order relaxation is found to provide strong bounds on the optimal value.
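Generically, a priority-index policy of the kind described activates, in each period, the m bandits whose current states carry the highest indices. A minimal sketch, assuming the index table is already computed (the numerical indices below are made up for illustration, not the dual-variable-based indices of the paper):

```python
def priority_index_policy(states, indices, m):
    """Activate the m bandits with the highest current index values.

    states  -- current state of each bandit
    indices -- indices[i][s]: priority index of bandit i in state s
    m       -- number of bandits that may be activated per period
    """
    ranked = sorted(range(len(states)),
                    key=lambda i: indices[i][states[i]],
                    reverse=True)
    return sorted(ranked[:m])

# Illustrative (made-up) index table for four two-state bandits:
indices = [[0.1, 0.9], [0.4, 0.5], [0.3, 0.8], [0.2, 0.6]]
active = priority_index_policy([1, 0, 1, 0], indices, m=2)  # bandits 0 and 2
```

The whole intelligence of such a heuristic lives in the index table; the activation rule itself is just a top-m selection.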
Abstract:
Nominal unification is an extension of first-order unification where terms can contain binders and unification is performed modulo α-equivalence. Here we prove that the existence of nominal unifiers can be decided in quadratic time. First, we linearly reduce nominal unification problems to a sequence of freshness constraints and equalities between atoms, modulo a permutation, using ideas of Paterson and Wegman for first-order unification. Second, we prove that solvability of these reduced problems can be checked in quadratic time. Finally, we point out how, using ideas of Brown and Tarjan for unbalanced merging, we could solve these reduced problems more efficiently.
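The paper's quadratic-time nominal algorithm is not reproduced here, but the classical first-order case it extends can be illustrated with a naive Robinson-style unifier (exponential in the worst case, unlike the almost-linear Paterson-Wegman scheme; the tuple encoding of terms is our own illustrative choice):

```python
def walk(t, subst):
    """Follow variable bindings until a non-bound term is reached."""
    while t[0] == 'var' and t[1] in subst:
        t = subst[t[1]]
    return t

def occurs(v, t, subst):
    """Occurs check: does variable v appear inside term t?"""
    t = walk(t, subst)
    if t[0] == 'var':
        return t[1] == v
    return any(occurs(v, a, subst) for a in t[1:])

def unify(t1, t2, subst=None):
    """Naive Robinson-style first-order unification.
    Terms: ('var', name) for variables, (fname, arg1, ..., argn) otherwise.
    Returns a substitution dict, or None if the terms do not unify."""
    if subst is None:
        subst = {}
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if t1[0] == 'var':
        if occurs(t1[1], t2, subst):
            return None
        s = dict(subst)
        s[t1[1]] = t2
        return s
    if t2[0] == 'var':
        return unify(t2, t1, subst)
    if t1[0] != t2[0] or len(t1) != len(t2):
        return None
    for a, b in zip(t1[1:], t2[1:]):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

# Example: unify f(X, b) with f(a, Y)
s = unify(('f', ('var', 'X'), ('b',)), ('f', ('a',), ('var', 'Y')))
# s == {'X': ('a',), 'Y': ('b',)}
```

The efficient algorithms avoid this sketch's repeated term copying by sharing structure in a DAG and merging equivalence classes, which is exactly where the Paterson-Wegman and Brown-Tarjan ideas enter.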
Abstract:
To study different temporal components of cancer mortality (age, period and cohort), methods of graphic representation were applied to Swiss mortality data from 1950 to 1984. Maps using continuous slopes ("contour maps") and based on eight tones of grey according to the absolute distribution of rates were used to represent the surfaces defined by the matrix of various age-specific rates. Further, progressively more complex regression surface equations were defined, on the basis of two independent variables (age/cohort) and a dependent one (each age-specific mortality rate). General patterns of trends in cancer mortality were thus identified, permitting definition of important cohort (e.g., upwards for lung and other tobacco-related neoplasms, or downwards for stomach) or period (e.g., downwards for intestinal or thyroid cancers) effects, besides the major underlying age component. For most cancer sites, even the lower-order (1st to 3rd) models utilised provided excellent fits, allowing immediate identification of the residuals (e.g., high or low mortality points) as well as estimates of first-order interactions between the three factors, although the parameters of the main effects remained undetermined. Thus, the method should essentially be used as a summary guide to illustrate and understand the general patterns of age, period and cohort effects in (cancer) mortality, although it cannot conceptually solve the inherent problem of identifiability of the three components.
Abstract:
Kinetic studies on soil potassium release can contribute to a better understanding of K availability to plants. This study was conducted to evaluate K release rates from the whole soil, clay, silt, and sand fractions of B-horizon samples of a basalt-derived Oxisol and a sienite-derived Ultisol, both representative soils from coffee regions of Minas Gerais State, Brazil. Potassium was extracted from each fraction after eight different shaking time periods (0-665 h) with either 0.001 mol L-1 citrate or oxalate at a 1:10 solid:solution ratio. First-order, Elovich, zero-order, and parabolic diffusion equations were used to parameterize the time dependence of K release. For the Oxisol, the first-order equation fitted best to the experimental data of K release, with similar rates for all fractions and independent of the presence of citrate or oxalate in the extractant solution. For all studied Ultisol fractions, in which K release rates increased when extractions were performed with citrate solution, the Elovich model described K release kinetics most adequately. The highest potassium release rate of the Ultisol silt fraction was probably due to the transference of "non-exchangeable" K to the extractant solution, whereas in the Oxisol exchangeable potassium represented the main K source in all studied fractions.
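The first-order release model used for the Oxisol, q(t) = q_max(1 − e^(−kt)), can be fitted by log-linearisation: ln(1 − q/q_max) = −kt is linear in t, so the rate constant is recovered from a regression through the origin. A minimal sketch on synthetic data (the rate constant, q_max and sampling times below are illustrative, not the study's values):

```python
import math

def simulate_first_order(q_max, k, times):
    """First-order release: q(t) = q_max * (1 - exp(-k t))."""
    return [q_max * (1 - math.exp(-k * t)) for t in times]

def fit_first_order_k(times, q, q_max):
    """Estimate k from ln(1 - q/q_max) = -k t via least squares through
    the origin (slope = sum(x*y)/sum(x*x), k = -slope)."""
    xs, ys = [], []
    for t, qi in zip(times, q):
        if 0 < qi < q_max:          # log only defined strictly inside (0, q_max)
            xs.append(t)
            ys.append(math.log(1 - qi / q_max))
    return -sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

times = [5, 24, 48, 96, 168, 336, 500, 665]   # shaking times in hours
k_true = 0.01
q = simulate_first_order(100.0, k_true, times)
k_est = fit_first_order_k(times, q, 100.0)     # recovers k_true on clean data
```

On real extraction data, q_max itself is unknown and noisy, so in practice one would fit both parameters by nonlinear least squares and compare the fit against the Elovich, zero-order and parabolic diffusion alternatives, as the study does.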
Abstract:
The significance of thermal fluctuations in nucleation in structural first-order phase transitions has been examined. The prototypical case of martensitic transitions has been experimentally investigated by means of acoustic emission techniques. We propose a model based on the mean first-passage time to account for the experimental observations. Our study provides a unified framework to establish the conditions for isothermal and athermal transitions to be observed.
Abstract:
We consider noncentered vortices and their arrays in a cylindrically trapped Bose-Einstein condensate at zero temperature. We study the kinetic energy and the angular momentum per particle in the Thomas-Fermi regime and their dependence on the distance of the vortices from the center of the trap. Using a perturbative approach with respect to the velocity field of the vortices, we calculate, to first order, the frequency shift of the collective low-lying excitations due to the presence of an off-center vortex or a vortex array, and compare these results with predictions that would be obtained by the application of a simple sum-rule approach, previously found to be very successful for centered vortices. It turns out that the simple sum-rule approach fails for off-centered vortices.
Abstract:
BACKGROUND: Bone graft substitutes such as calcium sulfate are frequently used as carrier materials for local antimicrobial therapy in orthopedic surgery. This study aimed to assess the systemic absorption and disposition of tobramycin in patients treated with a tobramycin-laden bone graft substitute (Osteoset® T). METHODS: Nine blood samples were taken from 12 patients over 10 days after Osteoset® T surgical implantation. Tobramycin concentration was measured by fluorescence polarization. Population pharmacokinetic analysis was performed using NONMEM to assess the average value and variability (CV) of pharmacokinetic parameters. Bioavailability (F) was assessed by equating clearance (CL) with creatinine clearance (Cockcroft CLCr). Based on the final model, simulations with various doses and renal function levels were performed (ClinicalTrials.gov number NCT01938417). RESULTS: The patients were 52 +/- 20 years old, their mean body weight was 73 +/- 17 kg and their mean CLCr was 119 +/- 55 mL/min. Either 10 g or 20 g of Osteoset® T with 4% tobramycin sulfate was implanted in various sites. Concentration profiles remained low and consistent with absorption rate-limited first-order release, while showing important variability. With CL equated to CLCr, the mean absorption rate constant (ka) was 0.06 h-1, F was 63% or 32% (CV 74%) for 10 and 20 g of Osteoset® T respectively, and the volume of distribution (V) was 16.6 L (CV 89%). Simulations predicted sustained high, potentially toxic concentrations with 10 g, 30 g and 50 g of Osteoset® T for CLCr values below 10, 20 and 30 mL/min, respectively. CONCLUSIONS: Osteoset® T does not raise toxicity concerns in subjects without significant renal failure. The risk/benefit ratio might turn unfavorable in case of severe renal failure, even after standard-dose implantation.
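An absorption rate-limited first-order profile corresponds to a one-compartment model with first-order absorption and elimination (the Bateman equation). A sketch with round numbers loosely inspired by the reported parameters (the dose and the use of CLCr for clearance below are illustrative assumptions, not the fitted population estimates):

```python
import math

def bateman(t, dose, f_bio, ka, ke, vol):
    """Concentration at time t for a one-compartment model with
    first-order absorption (ka) and first-order elimination (ke)."""
    return (f_bio * dose * ka) / (vol * (ka - ke)) * (
        math.exp(-ke * t) - math.exp(-ka * t))

# Illustrative values (NOT the study's fitted estimates):
dose_mg, f_bio, ka, vol_l = 400.0, 0.63, 0.06, 16.6
cl_l_per_h = 119 * 60 / 1000.0          # clearance: 119 mL/min -> L/h
ke = cl_l_per_h / vol_l                 # elimination rate constant
t_max = math.log(ka / ke) / (ka - ke)   # time of peak concentration
c_max = bateman(t_max, dose_mg, f_bio, ka, ke, vol_l)
```

Because ka is much smaller than ke here, the terminal slope of the profile reflects absorption rather than elimination ("flip-flop" kinetics), which is exactly what an absorption rate-limited release looks like in plasma.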
Abstract:
Executive Summary: The unifying theme of this thesis is the pursuit of a satisfactory way to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broad scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we implement an idea from the field of fuzzy set theory in the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results in asset pricing in incomplete markets to the important topic of measurement of financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third one can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative agent model that addresses some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings.
The empirical investigation to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics than realized returns from portfolio strategies that are optimal with respect to single performance measures. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance. We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate those that result from optimization only with respect to, for example, the Treynor ratio and Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e. the sequence of expected shortfalls for a range of quantiles.
Since the plot of the absolute Lorenz curve for the aggregated performance measures lay above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio returns distribution that second-order stochastically dominates those obtained from virtually all individual performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index, relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
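The two dominance checks described above can be sketched directly on samples of realized returns. A minimal illustration (the ECDF and expected-shortfall constructions are standard; requiring equally sized samples in the SSD check is a simplifying assumption):

```python
def ecdf(sample, x):
    """Empirical CDF of `sample` evaluated at x."""
    return sum(1 for s in sample if s <= x) / len(sample)

def first_order_dominates(a, b):
    """A first-order stochastically dominates B iff F_A(x) <= F_B(x) everywhere;
    it suffices to check the pooled sample points."""
    grid = sorted(set(a) | set(b))
    return all(ecdf(a, x) <= ecdf(b, x) for x in grid)

def second_order_dominates(a, b):
    """SSD check via pointwise comparison of cumulative sorted means
    (the absolute Lorenz curve / expected shortfall sequence).
    Assumes equally sized samples, compared quantile by quantile."""
    sa, sb = sorted(a), sorted(b)
    ca = [sum(sa[:i + 1]) / len(sa) for i in range(len(sa))]
    cb = [sum(sb[:i + 1]) / len(sb) for i in range(len(sb))]
    return all(x >= y for x, y in zip(ca, cb))
```

For example, the return sample [2, 3, 4] first-order (and hence second-order) dominates [1, 2, 3]: its ECDF lies weakly below at every point, and its absolute Lorenz curve lies weakly above.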
Abstract:
We present a phase-field model for the dynamics of the interface between two immiscible fluids with arbitrary viscosity contrast in a rectangular Hele-Shaw cell. Using asymptotic matching techniques, we verify that the model yields the correct Hele-Shaw equations in the sharp-interface limit, and compute the corrections to these equations to first order in the interface thickness. We also compute the effect of such corrections on the linear dispersion relation of the planar interface. We discuss in detail the conditions on the interface thickness needed to control the accuracy and convergence of the phase-field model to the limiting Hele-Shaw dynamics. In particular, convergence appears to be slower for high viscosity contrasts.
Abstract:
General Summary: Although the chapters of this thesis address a variety of issues, the principal aim is common: to test economic ideas in an international economic context. The intention has been to supply empirical findings using the largest suitable data sets and the most appropriate empirical techniques. This thesis can roughly be divided into two parts: the first one, corresponding to the first two chapters, investigates the link between trade and the environment; the second one, the last three chapters, is related to economic geography issues. Environmental problems are omnipresent in the daily press nowadays, and one of the arguments put forward is that globalisation causes severe environmental problems through the reallocation of investments and production to countries with less stringent environmental regulations. A measure of the amplitude of this undesirable effect is provided in the first part. The third and the fourth chapters explore the productivity effects of agglomeration. The computed spillover effects between different sectors indicate how cluster formation might be productivity-enhancing. The last chapter is not about how to better understand the world but how to measure it, and it was just a great pleasure to work on it. "The Economist" writes every week about the impressive population and economic growth observed in China and India, and everybody agrees that the world's center of gravity has shifted. But by how much and how fast did it shift? An answer is given in the last part, which proposes a global measure for the location of world production and allows us to visualize our results in Google Earth. A short summary of each of the five chapters is provided below.
The first chapter, entitled "Unraveling the World-Wide Pollution-Haven Effect", investigates the relative strength of the pollution haven effect (PH: comparative advantage in dirty products due to differences in environmental regulation) and the factor endowment effect (FE: comparative advantage in dirty, capital-intensive products due to differences in endowments). We compute the pollution content of imports using the IPPS coefficients (for three pollutants, namely biological oxygen demand, sulphur dioxide and toxic pollution intensity, for all manufacturing sectors) provided by the World Bank and use a gravity-type framework to isolate the two above-mentioned effects. Our study covers 48 countries that can be classified into 29 Southern and 19 Northern countries and uses the lead content of gasoline as a proxy for environmental stringency. For North-South trade we find significant PH and FE effects going in the expected, opposite directions and being of similar magnitude. However, when looking at world trade, the effects become very small because of the high North-North trade share, where we have no a priori expectations about the signs of these effects. Therefore popular fears about the trade effects of differences in environmental regulations might be exaggerated. The second chapter is entitled "Is Trade Bad for the Environment? Decomposing Worldwide SO2 Emissions, 1990-2000". First we construct a novel and large database containing reasonable estimates of SO2 emission intensities per unit of labor that vary across countries, periods and manufacturing sectors. Then we use these original data (covering 31 developed and 31 developing countries) to decompose worldwide SO2 emissions into the three well-known dynamic effects (scale, technique and composition effects). We find that the positive scale (+9.5%) and the negative technique (-12.5%) effects are the main driving forces of emission changes.
Composition effects between countries and sectors are smaller, both negative and of similar magnitude (-3.5% each). Given that trade matters via the composition effects, this means that trade reduces total emissions. We next construct, in a first experiment, a hypothetical world where no trade happens, i.e. each country produces its imports at home and no longer produces its exports. The difference between the actual world and this no-trade world allows us (neglecting price effects) to compute a static first-order trade effect. The latter now increases total world emissions because it allows, on average, dirty countries to specialize in dirty products. However, this effect is smaller in 2000 (3.5%) than in 1990 (10%), in line with the negative dynamic composition effect identified in the previous exercise. We then propose a second experiment, comparing effective emissions with the maximum or minimum possible level of SO2 emissions. These hypothetical levels of emissions are obtained by reallocating labour accordingly across sectors within each country (under the country-employment and the world industry-production constraints). Using linear programming techniques, we show that emissions are 90% lower than in the worst case, but that they could still be reduced by another 80% if emissions were minimized. The findings from this chapter go together with those from chapter one in the sense that trade-induced composition effects do not seem to be the main source of pollution, at least in the recent past. Turning now to the economic geography part of this thesis, the third chapter, entitled "A Dynamic Model with Sectoral Agglomeration Effects", consists of a short note that derives the theoretical model estimated in the fourth chapter. The derivation is directly based on the multi-regional framework of Ciccone (2002) but extends it to include sectoral disaggregation and a temporal dimension.
This allows us to formally write present productivity as a function of past productivity and other contemporaneous and past control variables. The fourth chapter, entitled "Sectoral Agglomeration Effects in a Panel of European Regions", takes the final equation derived in chapter three to the data. We investigate the empirical link between density and labour productivity based on regional data (245 NUTS-2 regions over the period 1980-2003). Using dynamic panel techniques allows us to control for the possible endogeneity of density and for region-specific effects. We find a positive long-run elasticity of labour productivity with respect to density of about 13%. When using data at the sectoral level, it seems that positive cross-sector and negative own-sector externalities are present in manufacturing, while financial services display strong positive own-sector effects. The fifth and last chapter, entitled "Is the World's Economic Center of Gravity Already in Asia?", computes the world's economic, demographic and geographic centers of gravity for 1975-2004 and compares them. Based on data for the largest cities in the world and using the physical concept of center of mass, we find that the world's economic center of gravity is still located in Europe, even though there is a clear shift towards Asia. To sum up, this thesis makes three main contributions. First, it provides new estimates of orders of magnitude for the role of trade in the globalisation and environment debate. Second, it computes reliable and disaggregated elasticities for the effect of density on labour productivity in European regions. Third, it allows us, in a geometrically rigorous way, to track the path of the world's economic center of gravity.
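The center-of-mass construction used in the last chapter can be sketched as follows: convert each city's coordinates to 3-D Cartesian vectors, take the weight-averaged (e.g. GDP-weighted) mean, and project the result back to latitude/longitude (the sample points below are illustrative, not the thesis data):

```python
import math

def center_of_gravity(points):
    """points: iterable of (lat_deg, lon_deg, weight).
    Returns the weighted 3-D center of mass projected back onto
    the unit sphere, as (lat_deg, lon_deg)."""
    x = y = z = total = 0.0
    for lat, lon, w in points:
        la, lo = math.radians(lat), math.radians(lon)
        x += w * math.cos(la) * math.cos(lo)
        y += w * math.cos(la) * math.sin(lo)
        z += w * math.sin(la)
        total += w
    x, y, z = x / total, y / total, z / total
    return (math.degrees(math.atan2(z, math.hypot(x, y))),
            math.degrees(math.atan2(y, x)))

# Two equally weighted 'cities' on the equator, at 0° and 90° E:
lat, lon = center_of_gravity([(0.0, 0.0, 1.0), (0.0, 90.0, 1.0)])
# -> roughly (0.0, 45.0)
```

Working in 3-D and projecting back avoids the distortions of averaging raw latitudes and longitudes, which is what makes the construction geometrically rigorous; the interior center of mass itself then moves over time as the weights shift.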
Abstract:
BACKGROUND: Aminoglycosides are mandatory in the treatment of severe infections in burns. However, their pharmacokinetics are difficult to predict in critically ill patients. Our objective was to describe the pharmacokinetic parameters of high doses of tobramycin administered at extended intervals in severely burned patients. METHODS: We prospectively enrolled 23 burned patients receiving tobramycin in combination therapy for Pseudomonas species infections in a burn ICU over 2 years in a therapeutic drug monitoring program. Trough and post-peak tobramycin levels were measured to adjust drug dosage. Pharmacokinetic parameters were derived from two-point first-order kinetics. RESULTS: Tobramycin peak concentration was 7.4 (3.1-19.6) microg/ml and the Cmax/MIC ratio 14.8 (2.8-39.2). Half-life was 6.9 (range 1.8-24.6) h with a distribution volume of 0.4 (0.2-1.0) l/kg. Clearance was 35 (14-121) ml/min and was weakly but significantly correlated with creatinine clearance. CONCLUSION: Tobramycin had a normal clearance, but an increased volume of distribution and a prolonged half-life in burned patients. However, the pharmacokinetic parameters of tobramycin are highly variable in burned patients. These data support extended-interval administration and strongly suggest that aminoglycosides should only be used within a structured pharmacokinetic monitoring program.
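Two-point first-order kinetics reduces to two formulas: the elimination rate constant ke = ln(C1/C2)/(t2 − t1) from a post-peak and a later (e.g. trough) level on the decay phase, and the half-life t1/2 = ln 2/ke. A minimal sketch (the sample times and levels below are illustrative, not study data):

```python
import math

def two_point_first_order(c1, t1, c2, t2):
    """Elimination rate constant and half-life from two concentrations
    on the mono-exponential decay phase (c1 at t1, c2 at t2 > t1)."""
    ke = math.log(c1 / c2) / (t2 - t1)   # first-order elimination constant
    t_half = math.log(2) / ke            # half-life = ln 2 / ke
    return ke, t_half

# Illustrative levels: post-peak 7.4 µg/ml at 1 h, trough 3.7 µg/ml
# at 8 h -- the level halves over 7 h, so t_half comes out as 7 h:
ke, t_half = two_point_first_order(7.4, 1.0, 3.7, 8.0)
```

The calculation assumes both samples lie on the same mono-exponential phase; with the prolonged and highly variable half-lives reported here, that assumption is exactly what a structured monitoring program is meant to verify.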
Abstract:
In this paper we find the quantities that are adiabatic invariants of any desired order for a general slowly time-dependent Hamiltonian. In a preceding paper, we chose a quantity that was initially an adiabatic invariant to first order, and sought the conditions to be imposed upon the Hamiltonian so that the quantum mechanical adiabatic theorem would be valid to mth order. [We found that this occurs when the first (m - 1) time derivatives of the Hamiltonian at the initial and final time instants are equal to zero.] Here we look for a quantity that is an adiabatic invariant to mth order for any Hamiltonian that changes slowly in time, and that does not fulfill any special condition (its first time derivatives are not zero initially and finally).