110 results for rate equation model
Abstract:
This paper analyzes a continuous-time stochastic model in which the decision-maker discounts instantaneous utilities and the final function at constant but different rates of time preference. In this setting one can model problems in which, as time approaches the final moment, the valuation of the final function increases relative to the instantaneous utilities. This type of asymmetry can be described neither with standard nor with variable discounting. In order to obtain time-consistent solutions, the stochastic dynamic programming equation is derived, whose solutions are Markovian equilibria. For this type of time preferences, the classical consumption and investment model (Merton, 1971) is studied for CRRA and CARA utility functions, comparing the Markovian equilibria with the time-inconsistent solutions. Finally, the introduction of a random final time is discussed.
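The asymmetry described in this abstract can be illustrated with a minimal numeric sketch: with the discount rate on instantaneous utilities larger than the rate on the final function, the weight of the final function relative to the remaining discounted utility flow grows as time approaches the horizon. The rates rho_u, rho_f and the horizon T below are assumptions of this sketch, not values from the paper.

```python
import math

# Hypothetical discount rates (assumptions of this sketch):
# rho_u discounts instantaneous utilities, rho_f the final function.
rho_u, rho_f, T = 0.10, 0.02, 10.0

def relative_final_weight(t):
    # Weight on the remaining utility flow: integral_t^T e^{-rho_u (s - t)} ds
    flow = (1.0 - math.exp(-rho_u * (T - t))) / rho_u
    # Weight on the final function at the horizon.
    final = math.exp(-rho_f * (T - t))
    return final / flow

early = relative_final_weight(0.0)  # far from the final moment
late = relative_final_weight(9.0)   # close to the final moment
```

As t approaches T, the relative weight on the final function rises, which is the kind of asymmetry the abstract says standard or variable discounting cannot capture.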
Abstract:
This paper analyzes the issue of the interiority of the optimal population growth rate in a two-period overlapping generations model with endogenous fertility. Using Cobb-Douglas utility and production functions, we show that the introduction of a cost of raising children allows for the possibility of the existence of an interior global maximum in the planner's problem, contrary to the exogenous fertility case.
Exploring the rate-limiting steps in visual phototransduction recovery by bottom-up kinetic modeling
Abstract:
Phototransduction in vertebrate photoreceptor cells represents a paradigm of signaling pathways mediated by G-protein-coupled receptors (GPCRs), which share common modules linking the initiation of the cascade to the final response of the cell. In this work, we focused on the recovery phase of the visual photoresponse, which is comprised of several interacting mechanisms. We employed current biochemical knowledge to investigate the response mechanisms of a comprehensive model of the visual phototransduction pathway. In particular, we have improved the model by implementing a more detailed representation of the recoverin (Rec)-mediated calcium feedback on rhodopsin kinase and including a dynamic arrestin (Arr) oligomerization mechanism. The model was successfully employed to investigate the rate-limiting steps in the recovery of the rod photoreceptor cell after illumination. Simulation of experimental conditions in which the expression levels of rhodopsin kinase (RK), of the regulator of the G-protein signaling (RGS), of Arr and of Rec were altered individually or in combination revealed severe kinetic constraints on the dynamics of the overall network. Our simulations confirm that RGS-mediated effector shutdown is the rate-limiting step in the recovery of the photoreceptor and show that the dynamic formation and dissociation of Arr homodimers and homotetramers at different light intensities significantly affect the timing of rhodopsin shutdown. The transition of Arr from its oligomeric storage forms to its monomeric form serves to temper its availability in the functional state. Our results may explain the puzzling evidence that overexpressing RK does not influence the saturation time of rod cells at bright light stimuli. The approach presented here could be extended to the study of other GPCR signaling pathways.
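The notion of a rate-limiting step can be illustrated with a deliberately minimal two-step cascade: active rhodopsin R* decays at rate k_r, and the active effector E* is produced by R* and shut off (RGS-mediated, in the abstract's terms) at rate k_rgs. The rate constants and the simple Euler integrator below are assumptions of this sketch, not the paper's full biochemical model.

```python
def simulate(k_r, k_rgs, t_end=20.0, dt=0.001):
    """Euler-integrate dR/dt = -k_r R, dE/dt = k_r R - k_rgs E, from R=1, E=0."""
    r, e = 1.0, 0.0
    t, trace = 0.0, []
    while t < t_end:
        dr = -k_r * r
        de = k_r * r - k_rgs * e
        r += dr * dt
        e += de * dt
        t += dt
        trace.append((t, e))
    return trace

def recovery_time(trace, frac=0.1):
    """Time at which E* first falls below `frac` of its peak, after the peak."""
    peak_t, peak_e = max(trace, key=lambda p: p[1])
    for t, e in trace:
        if t > peak_t and e < frac * peak_e:
            return t
    return None

# When effector shutdown (k_rgs) is the slower step, halving it delays
# recovery far more than any change to the faster rhodopsin step would.
base = recovery_time(simulate(k_r=2.0, k_rgs=0.5))
slow_rgs = recovery_time(simulate(k_r=2.0, k_rgs=0.25))
```

In this toy cascade, recovery time is governed by the slower of the two rates, which is the qualitative logic behind identifying RGS-mediated shutdown as rate-limiting.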
Abstract:
The main goal of this article is to provide an answer to the question: "Does anything forecast exchange rates, and if so, which variables?". It is well known that exchange rate fluctuations are very difficult to predict using economic models, and that a random walk forecasts exchange rates better than any economic model (the Meese and Rogoff puzzle). However, the recent literature has identified a series of fundamentals/methodologies that claim to have resolved the puzzle. This article provides a critical review of the recent literature on exchange rate forecasting and illustrates the new methodologies and fundamentals that have been recently proposed in an up-to-date, thorough empirical analysis. Overall, our analysis of the literature and the data suggests that the answer to the question "Are exchange rates predictable?" is "It depends": on the choice of predictor, forecast horizon, sample period, model, and forecast evaluation method. Predictability is most apparent when one or more of the following hold: the predictors are Taylor rule or net foreign assets, the model is linear, and a small number of parameters are estimated. The toughest benchmark is the random walk without drift.
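The benchmark comparison discussed above can be sketched on synthetic data: out-of-sample root-mean-square error of a driftless random walk (tomorrow's forecast is today's rate) versus a one-parameter linear model on a hypothetical fundamental. The data-generating process, sample sizes and the rolling regression below are assumptions of this sketch, not the article's empirical setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
fundamental = rng.normal(size=n)
# Synthetic exchange rate: near-random-walk with a weak link to the fundamental.
rate = np.cumsum(rng.normal(size=n)) + 0.1 * fundamental

split = 200
rw_errors, lin_errors = [], []
for t in range(split, n - 1):
    # Random walk without drift: forecast of rate[t+1] is rate[t].
    rw_errors.append(rate[t + 1] - rate[t])
    # Rolling OLS of the rate change on the lagged fundamental (one parameter).
    dy = np.diff(rate[:t + 1])      # rate[i+1] - rate[i], i = 0..t-1
    x = fundamental[:t]
    beta = (x @ dy) / (x @ x)
    lin_errors.append(rate[t + 1] - (rate[t] + beta * fundamental[t]))

rmse_rw = float(np.sqrt(np.mean(np.square(rw_errors))))
rmse_lin = float(np.sqrt(np.mean(np.square(lin_errors))))
```

Which model "wins" depends on the simulated link between fundamental and rate, mirroring the abstract's "It depends" conclusion.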
Abstract:
We study the gains from increased wage flexibility and their dependence on exchange rate policy, using a small open economy model with staggered price and wage setting. Two results stand out: (i) the impact of wage adjustments on employment is smaller the more the central bank seeks to stabilize the exchange rate, and (ii) an increase in wage flexibility often reduces welfare, and more likely so in economies under an exchange rate peg or an exchange rate-focused monetary policy. Our findings call into question the common view that wage flexibility is particularly desirable in a currency union.
Abstract:
Recent single-cell studies in monkeys (Romo et al., 2004) show that the activity of neurons in the ventral premotor cortex covaries with the animal's decisions in a perceptual comparison task regarding the frequency of vibrotactile events. The firing rate response of these neurons was dependent only on the frequency differences between the two applied vibrations, the sign of that difference being the determining factor for correct task performance. We present a biophysically realistic neurodynamical model that can account for the most relevant characteristics of this decision-making-related neural activity. One of the nontrivial predictions of this model is that Weber's law will underlie the perceptual discrimination behavior. We confirmed this prediction in behavioral tests of vibrotactile discrimination in humans and propose a computational explanation of perceptual discrimination that accounts naturally for the emergence of Weber's law. We conclude that the neurodynamical mechanisms and computational principles underlying the decision-making processes in this perceptual discrimination task are consistent with a fluctuation-driven scenario in a multistable regime.
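Weber's law, as invoked above, can be illustrated with a toy signal-detection simulation in which encoding noise scales with the stimulus, so discrimination accuracy depends on the relative difference between the two frequencies rather than on their absolute difference. The multiplicative-noise assumption and the Weber fraction of 0.1 are choices of this sketch, not the paper's biophysical model.

```python
import random

def accuracy(f_base, df, weber_fraction=0.1, trials=20000, seed=1):
    """Fraction of trials in which the higher frequency is correctly judged
    higher, when each frequency f is read out as Gaussian noise N(f, w*f)."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        a = rng.gauss(f_base, weber_fraction * f_base)
        b = rng.gauss(f_base + df, weber_fraction * (f_base + df))
        correct += b > a
    return correct / trials

# Same relative difference (10%) at two base frequencies gives
# approximately the same accuracy -- the signature of Weber's law.
acc_low = accuracy(f_base=20.0, df=2.0)
acc_high = accuracy(f_base=30.0, df=3.0)
```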
Abstract:
This paper analyzes the asymptotic performance of maximum likelihood (ML) channel estimation algorithms in wideband code division multiple access (WCDMA) scenarios. We concentrate on systems with periodic spreading sequences (period larger than or equal to the symbol span) where the transmitted signal contains a code division multiplexed pilot for channel estimation purposes. First, the asymptotic covariances of the training-only, semi-blind conditional maximum likelihood (CML) and semi-blind Gaussian maximum likelihood (GML) channel estimators are derived. Then, these formulas are further simplified assuming randomized spreading and training sequences under the approximation of high spreading factors and high number of codes. The results provide a useful tool to describe the performance of the channel estimators as a function of basic system parameters such as number of codes, spreading factors, or traffic to training power ratio.
Abstract:
Context. The understanding of Galaxy evolution can be facilitated by the use of population synthesis models, which allow one to test hypotheses on the star formation history, star evolution, and the chemical and dynamical evolution of the Galaxy. Aims. The new version of the Besançon Galaxy Model (hereafter BGM) aims to provide a more flexible and powerful tool to investigate the Initial Mass Function (IMF) and Star Formation Rate (SFR) of the Galactic disc. Methods. We present a new strategy for the generation of thin disc stars which treats the IMF, SFR and evolutionary tracks as free parameters. We have updated most of the ingredients for the star count production and, for the first time, binary stars are generated in a consistent way. We keep in this new scheme the local dynamical self-consistency as in Bienaymé et al. (1987). We then compare simulations from the new model with Tycho-2 data and the local luminosity function, as a first test to verify and constrain the new ingredients. The effects of changing thirteen different ingredients of the model are systematically studied. Results. For the first time, a full-sky comparison is performed between the BGM and data. This strategy allows us to constrain the IMF slope at high masses, which is found to be close to 3.0, excluding a shallower slope such as Salpeter's. The SFR is found to be decreasing whatever IMF is assumed. The model is compatible with a local dark matter density of 0.011 M⊙ pc⁻³, implying that there is no compelling evidence for a significant amount of dark matter in the disc. While the model is fitted to Tycho-2 data, a magnitude-limited sample with V < 11, we check that it is still consistent with fainter stars. Conclusions. The new model constitutes a new basis for further comparisons with large-scale surveys and is being prepared to become a powerful tool for the analysis of the Gaia mission data.
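Generating synthetic star counts from an assumed IMF can be sketched by inverse-transform sampling of a power law dN/dm ∝ m^(-alpha). Only the high-mass slope of about 3.0 comes from the abstract; the mass limits and sample size below are illustrative.

```python
import random

def sample_imf(alpha, m_lo, m_hi, n, seed=0):
    """Draw n masses from p(m) proportional to m^-alpha on [m_lo, m_hi],
    via inversion of the power-law CDF (valid for alpha != 1)."""
    rng = random.Random(seed)
    k = 1.0 - alpha
    out = []
    for _ in range(n):
        u = rng.random()
        m = (m_lo**k + u * (m_hi**k - m_lo**k)) ** (1.0 / k)
        out.append(m)
    return out

# Illustrative draw: slope 3.0 (from the abstract), hypothetical mass range.
masses = sample_imf(alpha=3.0, m_lo=1.0, m_hi=100.0, n=50000)
```

A slope of 3.0 concentrates most of the probability at low masses, which is why high-mass star counts constrain the slope so sensitively.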
Abstract:
Simulation is a useful tool in cardiac SPECT to assess quantification algorithms. However, simple equation-based models are limited in their ability to simulate realistic heart motion and perfusion. We present a numerical dynamic model of the left ventricle, which allows us to simulate normal and anomalous cardiac cycles, as well as perfusion defects. Bicubic splines were fitted to a number of control points to represent endocardial and epicardial surfaces of the left ventricle. A transformation from each point on the surface to a template of activity was made to represent the myocardial perfusion. Geometry-based and patient-based simulations were performed to illustrate this model. Geometry-based simulations modeled (1) a normal patient, (2) a well-perfused patient with abnormal regional function, (3) an ischaemic patient with abnormal regional function, and (4) a patient study including tracer kinetics. Patient-based simulation consisted of a left ventricle including a realistic shape and motion obtained from a magnetic resonance study. We conclude that this model has the potential to study the influence of several physical parameters and the left ventricle contraction in myocardial perfusion SPECT and gated-SPECT studies.
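The surface representation described above can be sketched with a bicubic spline fitted to a grid of control points, here using SciPy's RectBivariateSpline on a hypothetical endocardial radius parameterized by angle and long-axis position. The grid and the radius function are assumptions of this sketch, not the paper's geometry.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

theta = np.linspace(0.0, 2 * np.pi, 9)   # circumferential angle (control grid)
z = np.linspace(0.0, 1.0, 7)             # long-axis position, apex to base
# Hypothetical radius: narrower toward z=1, with mild angular variation.
radius = 2.0 - 1.0 * z[None, :] + 0.1 * np.cos(theta)[:, None]

# Default degrees kx=ky=3 give a bicubic spline; s=0 interpolates the
# control points exactly.
spline = RectBivariateSpline(theta, z, radius)

# Evaluate the smooth surface on a finer grid, as one would to render
# the endocardial wall or map activity onto it.
fine = spline(np.linspace(0.0, 2 * np.pi, 50), np.linspace(0.0, 1.0, 50))
```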
Abstract:
The effect of the heat flux on the rate of chemical reaction in dilute gases is shown to be important for reactions characterized by high activation energies and in the presence of very large temperature gradients. This effect, obtained from the second-order terms in the distribution function (similar to those obtained in the Burnett approximation to the solution of the Boltzmann equation), is derived on the basis of information theory. It is shown that the analytical results describing the effect are simpler if the kinetic definition of the nonequilibrium temperature is introduced than if the thermodynamic definition is used. The numerical results are nearly the same for both definitions.
Abstract:
A network of twenty stakes was set up on Johnsons Glacier in order to determine its dynamics. During the austral summers from 1994-95 to 1997-98, we estimated surface velocities, mass balances and ice thickness variations. Horizontal velocity increased downstream from 1 m a⁻¹ near the ice divides to 40 m a⁻¹ near the ice terminus. The accumulation zone showed low accumulation rates (maximum of 0.6 m a⁻¹ (ice)), whereas in the lower part of the glacier, ablation rates were 4.3 m a⁻¹ (ice). Over the 3-year study period, both in the accumulation and ablation zones, we detected a reduction in the ice surface level ranging from 2 to 10 m. From the annual vertical velocities and ice-thinning data, the mass balance was obtained and compared with the mass balance field values, resulting in similar estimates. Flux values were calculated using cross-section data and horizontal velocities, and compared with the results obtained by means of mass balance and ice-thinning data using the continuity equation. The two methods gave similar results.
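The continuity-equation comparison can be sketched numerically: through a cross section, ice flux is mean horizontal velocity times cross-sectional area, and the flux divergence between two gates plus the specific mass balance gives the surface elevation change of the reach in between. All numbers below are illustrative, not the Johnsons Glacier measurements.

```python
# Hypothetical gate measurements (ice-equivalent units).
area_up, v_up = 5.0e4, 10.0      # upstream gate: cross-section (m^2), velocity (m/yr)
area_down, v_down = 3.0e4, 25.0  # downstream gate
surface_area = 2.0e6             # glacier surface area between the gates (m^2)
balance = -1.0                   # specific mass balance of the reach (m/yr, ablation)

q_up = v_up * area_up            # ice flux entering the reach (m^3/yr)
q_down = v_down * area_down      # ice flux leaving the reach (m^3/yr)

# Continuity: dH/dt = b + (Q_in - Q_out) / A
thinning = balance + (q_up - q_down) / surface_area   # m/yr surface change
```

With these illustrative numbers the reach thins by about 1.1 m/yr; in the study, this continuity-based estimate is the quantity compared against the stake-derived thinning.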
Abstract:
Objectives: Publication bias may affect the validity of evidence based medical decisions. The aim of this study is to assess whether research outcomes affect the dissemination of clinical trial findings, in terms of rate, time to publication, and impact factor of journal publications. Methods and Findings: All drug-evaluating clinical trials submitted to and approved by a general hospital ethics committee between 1997 and 2004 were prospectively followed to analyze their fate and publication. Published articles were identified by searching PubMed and other electronic databases. Clinical study final reports submitted to the ethics committee, final report synopses available online and meeting abstracts were also considered as sources of study results. Study outcomes were classified as positive (when statistical significance favoring the experimental drug was achieved), negative (when no statistical significance was achieved or it favored the control drug) and descriptive (for non-controlled studies). Time to publication was defined as the time from study closure to publication. A survival analysis was performed using a Cox regression model to analyze time to publication. Journal impact factors of identified publications were recorded. The publication rate was 48.4% (380/785). Study results were identified for 68.9% of all completed clinical trials (541/785). The publication rate was 84.9% (180/212) for studies with results classified as positive and 68.9% (128/186) for studies with results classified as negative (p<0.001). Median time to publication was 2.09 years (95% CI 1.61-2.56) for studies with results classified as positive and 3.21 years (95% CI 2.69-3.70) for studies with results classified as negative (hazard ratio 1.99, 95% CI 1.55-2.55). No differences were found in publication impact factor between positive (median 6.308, interquartile range 3.141-28.409) and negative result studies (median 8.266, interquartile range 4.135-17.157).
Conclusions: Clinical trials with positive outcomes have significantly higher rates and shorter times to publication than those with negative results. However, no differences have been found in terms of impact factor.
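Under a constant-hazard (exponential) simplification, the hazard ratio maps directly to a ratio of median times, since the median of an exponential survival time is ln(2) divided by the hazard. This is a simplification of the Cox model used in the study, and the reported medians (2.09 vs 3.21 years) need not satisfy the relation exactly because the Cox model does not assume constant hazards.

```python
import math

hr = 1.99               # hazard ratio reported in the abstract (positive vs negative)
median_negative = 3.21  # years, reported median time to publication (negative results)

# Exponential survival: S(t) = exp(-h t), so median = ln(2) / h.
hazard_negative = math.log(2) / median_negative
hazard_positive = hr * hazard_negative
median_positive = math.log(2) / hazard_positive   # equals median_negative / hr
```

Here the implied positive-result median is about 1.6 years, shorter than the reported 2.09, which is consistent with the hazards not being constant over time.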
Identification-Commitment Inventory (ICI-Model): confirmatory factor analysis and construct validity
Abstract:
The aim of this study is to confirm the factorial structure of the Identification-Commitment Inventory (ICI) developed within the frame of the Human System Audit (HSA) (Quijano et al. in Revist Psicol Soc Apl 10(2):27-61, 2000; Pap Psicól Revist Col Of Psicó 29:92-106, 2008). Commitment and identification are understood by the HSA at an individual level as part of the quality of human processes and resources in an organization, and therefore as antecedents of important organizational outcomes, such as personnel turnover intentions, organizational citizenship behavior, etc. (Meyer et al. in J Org Behav 27:665-683, 2006). The theoretical integrative model which underlies the ICI (Quijano et al. 2000) was tested in a sample (N = 625) of workers in a Spanish public hospital. Confirmatory factor analysis through structural equation modeling was performed. An elliptical least squares solution was chosen as the estimation procedure on account of the non-normal distribution of the variables. The results confirm the goodness of fit of an integrative model which underlies the relation between Commitment and Identification, although each one is operatively different.
Abstract:
This paper analyzes the effect of introducing time-inconsistent preferences on the optimal decisions over consumption, investment and life insurance purchase. Specifically, it aims to capture the growing importance that an individual attaches to the bequest left behind and to the wealth available for retirement over the course of the working life. To this end, we start from a continuous-time stochastic model with random final time and introduce heterogeneous discounting, considering an agent with a known residual-life distribution. In order to obtain time-consistent solutions, a non-standard dynamic programming equation is solved. Explicit solutions are found for CRRA and CARA utility functions. Finally, the results obtained are illustrated numerically.