906 results for Incidence function model
Abstract:
A customer is presumed to gravitate to a facility according to its distance and its attractiveness. However, regarding the location of the facility, the presumption is that the customer opts for the shortest route to the nearest facility. This paradox was recently resolved by the introduction of the gravity p-median model. The model is yet to be implemented and tested empirically. We implemented the model in an empirical problem of locating locksmiths, vehicle inspections, and retail stores of vehicle spare parts, and we compared the solutions with those of the p-median model. We found the gravity p-median model to be of limited use for the problem of locating facilities, as it either gives solutions similar to the p-median model or gives unstable solutions due to a non-concave objective function.
Abstract:
OBJECTIVE: Higher levels of the novel inflammatory marker pentraxin 3 (PTX3) predict cardiovascular mortality in patients with chronic kidney disease (CKD). Yet, whether PTX3 predicts worsening of kidney function has been less well studied. We therefore investigated the associations between PTX3 levels, kidney disease measures and CKD incidence. METHODS: Cross-sectional associations between serum PTX3 levels, urinary albumin/creatinine ratio (ACR) and cystatin C-estimated glomerular filtration rate (GFR) were assessed in two independent community-based cohorts of elderly subjects: the Prospective Investigation of the Vasculature in Uppsala Seniors (PIVUS, n = 768, 51% women, mean age 75 years) and the Uppsala Longitudinal Study of Adult Men (ULSAM, n = 651, mean age 77 years). The longitudinal association between PTX3 level at baseline and incident CKD (GFR <60 mL/min/1.73 m²) was also analysed (number of events/number at risk: PIVUS 229/746, ULSAM 206/315). RESULTS: PTX3 levels were inversely associated with GFR [PIVUS: B-coefficient per 1 SD increase -0.16, 95% confidence interval (CI) -0.23 to -0.10, P < 0.001; ULSAM: B-coefficient per 1 SD increase -0.09, 95% CI -0.16 to -0.01, P < 0.05], but not ACR, after adjusting for age, gender, C-reactive protein and prevalent cardiovascular disease in cross-sectional analyses. In longitudinal analyses, PTX3 levels predicted incident CKD after 5 years in both cohorts [PIVUS: multivariable odds ratio (OR) 1.21, 95% CI 1.01-1.45, P < 0.05; ULSAM: multivariable OR 1.37, 95% CI 1.07-1.77, P < 0.05]. CONCLUSIONS: Higher PTX3 levels are associated with lower GFR and independently predict incident CKD in elderly men and women. Our data confirm and extend previous evidence suggesting that inflammatory processes are activated in the early stages of CKD and drive impairment of kidney function. Circulating PTX3 appears to be a promising biomarker of kidney disease.
Abstract:
Regarding the location of a facility, the presumption in the widely used p-median model is that the customer opts for the shortest route to the nearest facility. However, this assumption is problematic on free markets, since the customer is presumed to gravitate to a facility according to its distance and its attractiveness. The recently introduced gravity p-median model offers an extension to the p-median model that accounts for this. The model is therefore potentially interesting, although it has not yet been implemented and tested empirically. In this paper, we have implemented the model in an empirical problem of locating vehicle inspections, locksmiths, and retail stores of vehicle spare parts for the purpose of investigating its superiority to the p-median model. We found, however, the gravity p-median model to be of limited use for the problem of locating facilities, as it either gives solutions similar to the p-median model or gives unstable solutions due to a non-concave objective function.
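The contrast between the two allocation rules can be sketched as follows. This is a minimal illustration assuming a Huff-style gravity split with exponential distance decay; the decay parameter `beta` and the toy data are assumptions, not the paper's calibration:

```python
import numpy as np

def p_median_cost(dist, facilities, demand):
    # p-median rule: every customer travels to the nearest open facility.
    return float(np.sum(demand * dist[:, facilities].min(axis=1)))

def gravity_p_median_cost(dist, facilities, demand, attract, beta=0.5):
    # Gravity rule: patronage is split across facilities in proportion to
    # attractiveness * exp(-beta * distance); the cost is the
    # probability-weighted expected travel distance.
    d = dist[:, facilities]
    w = attract[facilities] * np.exp(-beta * d)
    prob = w / w.sum(axis=1, keepdims=True)
    return float(np.sum(demand[:, None] * prob * d))

rng = np.random.default_rng(0)
dist = rng.uniform(1.0, 10.0, size=(20, 5))   # 20 customers x 5 candidate sites
demand = rng.uniform(1.0, 5.0, size=20)
attract = np.ones(5)                          # equal attractiveness
sites = [0, 3]                                # an open set of p = 2 facilities
# The gravity cost can never be lower: averaging over facilities
# cannot beat always choosing the nearest one.
print(p_median_cost(dist, sites, demand) <= gravity_p_median_cost(dist, sites, demand, attract))  # True
```

For a fixed set of open facilities the gravity objective is always at least the p-median objective, which is one reason the two models can produce similar solutions when attractiveness is roughly uniform.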
Abstract:
Reliable flood forecasting is a crucial problem for assessing flood risk and the consequent damages. Different hydrological models (distributed, semi-distributed or lumped) have been proposed to deal with this issue. The choice of the proper model structure has been investigated by many authors and is one of the main sources of uncertainty in the correct evaluation of the outflow hydrograph. In addition, the recent increase in data availability makes it possible to update hydrological models in response to real-time observations. For these reasons, the aim of this work is to evaluate the effect of different structures of a semi-distributed hydrological model on the assimilation of distributed uncertain discharge observations. The study was applied to the Bacchiglione catchment, located in Italy. The first methodological step was to divide the basin into different sub-basins according to topographic characteristics. Secondly, two different structures of the semi-distributed hydrological model were implemented in order to estimate the outflow hydrograph. Then, synthetic observations of uncertain discharge values were generated as a function of the observed and simulated flow at the basin outlet, and assimilated into the semi-distributed models using a Kalman filter. Finally, different spatial patterns of sensor locations were assumed to update the model state in response to the uncertain discharge observations. The results of this work point out that, overall, the assimilation of uncertain observations can improve hydrologic model performance. In particular, it was found that the model structure is an important factor, difficult to characterize, since it can induce different forecasts in terms of outflow discharge. This study is partly supported by the FP7 EU Project WeSenseIt.
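The assimilation step can be illustrated with a scalar Kalman filter update. This is a minimal sketch; the one-dimensional state and the numbers are assumptions for illustration, not the study's actual model:

```python
def kalman_update(x_prior, p_prior, z, r, h=1.0):
    # Scalar Kalman filter update: blend the model forecast x_prior
    # (error variance p_prior) with an uncertain observation z (variance r).
    k = p_prior * h / (h * p_prior * h + r)   # Kalman gain
    x_post = x_prior + k * (z - h * x_prior)  # corrected state
    p_post = (1.0 - k * h) * p_prior          # reduced uncertainty
    return x_post, p_post

# Forecast discharge 120 m^3/s (variance 25) meets a noisy observation
# of 135 m^3/s (variance 100): the update trusts the model more.
x, p = kalman_update(120.0, 25.0, 135.0, 100.0)
print(x, p)  # 123.0 20.0
```

The gain weighs model and observation by their relative uncertainties, which is what makes assimilating *uncertain* discharge observations meaningful: noisier sensors simply receive a smaller gain.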
Abstract:
A three-dimensional time-dependent hydrodynamic and heat transport model of Lake Binaba, a shallow and small dam reservoir in Ghana, emphasizing the simulation of dynamics and thermal structure, has been developed. Most numerical studies of temperature dynamics in reservoirs are based on one- or two-dimensional models. These models are not applicable to reservoirs characterized by complex flow patterns and unsteady heat exchange between the atmosphere and the water surface. Continuity, momentum and temperature transport equations have been solved. Proper assignment of boundary conditions, especially surface heat fluxes, has been found crucial in simulating the lake's hydrothermal dynamics. The model is based on the Reynolds-averaged Navier-Stokes equations, using a Boussinesq approach, with a standard k − ε turbulence closure to solve the flow field. The thermal model includes a heat source term, which takes into account the short-wave radiation, and also heat convection at the free surface, which is a function of air temperature, wind velocity and the stability conditions of the atmospheric boundary layer over the water surface. The governing equations of the model have been solved with OpenFOAM, an open-source, freely available CFD toolbox. At its core, OpenFOAM has a set of efficient C++ modules that are used to build solvers. It uses collocated, polyhedral numerics that can be applied on unstructured meshes and can be easily extended to run in parallel. A new solver has been developed to solve the lake's hydrothermal model. The simulated temperature was compared against a 15-day field data set. Simulated and measured temperature profiles at the probe locations show reasonable agreement. The model might be able to compute the total heat storage of water bodies to estimate evaporation from the water surface.
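The wind-dependent convective exchange at the free surface can be sketched with a bulk formula. The linear wind function `h = a + b * U` and its coefficients are assumptions for illustration; real wind functions are calibrated per lake:

```python
def convective_surface_flux(t_air, t_water, wind_speed, a=3.0, b=4.0):
    # Bulk estimate of the sensible (convective) heat flux at the free
    # surface, in W/m^2, positive into the water body.
    # The transfer coefficient h grows with wind speed (assumed linear here).
    h = a + b * wind_speed                 # W m^-2 K^-1
    return h * (t_air - t_water)

print(convective_surface_flux(30.0, 27.0, 2.0))   # warm air over cooler water: flux into the lake
print(convective_surface_flux(20.0, 27.0, 2.0))   # cool air: the lake loses heat
```

The sign convention makes the stability dependence visible: the same wind speed can heat or cool the water column depending on the air-water temperature gradient.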
Abstract:
This paper investigates the income inequality generated by a job-search process when different cohorts of homogeneous workers are allowed to have different degrees of impatience. Using the fact that the average wage under the invariant Markovian distribution is a decreasing function of the discount factor (Cysne (2004, 2006)), I show that the Lorenz curve and the between-cohort Gini coefficient of income inequality can be easily derived in this case. An example with arbitrary measures regarding the wage offers and the distribution of time preferences among cohorts provides some insights into how much income inequality can be generated, and into how it varies as a function of the probability of unemployment and of the probability that the worker does not find a job offer each period.
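The Gini coefficient obtained from a Lorenz curve can be computed directly for any discrete wage distribution. This is a generic sketch of the standard construction, not the paper's closed-form derivation:

```python
import numpy as np

def gini(values, weights=None):
    # Gini coefficient via the Lorenz curve of the weighted, sorted values:
    # G = 1 - 2 * (area under the Lorenz curve).
    values = np.asarray(values, dtype=float)
    weights = np.ones_like(values) if weights is None else np.asarray(weights, float)
    order = np.argsort(values)
    v, w = values[order], weights[order]
    p = np.concatenate(([0.0], np.cumsum(w) / w.sum()))            # population share
    L = np.concatenate(([0.0], np.cumsum(v * w) / (v * w).sum()))  # income share
    area = np.sum((p[1:] - p[:-1]) * (L[1:] + L[:-1]) / 2.0)       # trapezoidal rule
    return 1.0 - 2.0 * area

print(gini([1, 1, 1, 1]))   # perfect equality -> 0.0
print(gini([2, 4, 6, 8]))   # -> 0.25
```

With cohort sizes passed as `weights` and cohort average wages as `values`, the same function yields a between-cohort Gini of the kind the paper derives analytically.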
Abstract:
This paper explores the use of an intertemporal job-search model in the investigation of within-cohort and between-cohort income inequality, the latter being generated by the heterogeneity of time preferences among cohorts of homogeneous workers and the former by the cross-sectional turnover in the job market. It also offers an alternative explanation for the empirically documented negative correlation between time preference and labor income. Under some specific distributions regarding wage offers and time preferences, we show how the within-cohort and between-cohort Gini coefficients of income distribution can be calculated, and how they vary as a function of the parameters of the model.
Abstract:
This paper is a theoretical and empirical study of the relationship between indexing policy and feedback mechanisms in the inflationary adjustment process in Brazil. The focus of our study is on two policy issues: (1) did the Brazilian system of indexing of interest rates, the exchange rate, and wages make inflation so dependent on its own past values that it created a significant feedback process and inertia in the behaviour of inflation, and (2) was the feedback effect of past inflation upon itself so strong that it dominated the effect of monetary/fiscal variables upon current inflation? This paper develops a simple model designed to capture several "stylized facts" of Brazilian indexing policy. Separate rules of "backward indexing" for interest rates, the exchange rate, and wages, reflecting the evolution of policy changes in Brazil, are incorporated in a two-sector model of industrial and agricultural prices. A transfer function derived from this model shows inflation depending on three factors: (1) past values of inflation, (2) monetary and fiscal variables, and (3) supply-shock variables. The indexing rules for interest rates, the exchange rate, and wages place restrictions on the coefficients of the transfer function. Variations in the policy-determined parameters of the indexing rules imply changes in the coefficients of the transfer function for inflation. One implication of this model, in contrast to previous results derived in analytically simpler models of indexing, is that a higher degree of indexing does not make current inflation more responsive to current monetary shocks. The empirical section of this paper studies the central hypotheses of this model through estimation of the inflation transfer function with time-varying parameters. The results show a systematic non-random variation of the transfer function coefficients closely synchronized with changes in the observed values of the wage-indexing parameters.
Non-parametric tests show the variation of the transfer function coefficients to be statistically significant at the time of the changes in wage-indexing rules in Brazil. As the degree of indexing increased, the inflation feedback coefficients increased, while the effect of external price and agricultural shocks progressively increased and monetary effects progressively decreased.
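The central mechanism, stronger indexing raising the inflation feedback coefficient and hence inflation inertia, can be sketched with a toy transfer function simulation. The coefficient values and shock processes below are assumptions for illustration only, not the paper's estimates:

```python
import numpy as np

def simulate_inflation(theta, n=200, b_money=0.5, c_supply=0.3, seed=1):
    # Toy transfer function: pi_t = theta * pi_{t-1} + b * m_t + c * s_t,
    # where theta plays the role of the indexing-driven feedback coefficient.
    rng = np.random.default_rng(seed)
    money = rng.normal(0.0, 1.0, n)     # monetary shocks
    supply = rng.normal(0.0, 1.0, n)    # supply shocks
    pi = np.zeros(n)
    for t in range(1, n):
        pi[t] = theta * pi[t - 1] + b_money * money[t] + c_supply * supply[t]
    return pi

low = simulate_inflation(theta=0.2)    # weak indexing
high = simulate_inflation(theta=0.9)   # strong indexing
# Stronger feedback means more persistent and more volatile inflation
# for the same shock sequence.
print(low.std() < high.std())  # True
```

The same shocks produce a far more inertial inflation path when the feedback coefficient is high, which is the qualitative pattern the time-varying-parameter estimates pick up.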
Abstract:
Verdelhan (2009) shows that, if one wishes to explain the behaviour of the risk premium in foreign bond markets using the external habit formation model proposed by Campbell and Cochrane (1999), the equilibrium risk-free rate must be specified pro-cyclically. We show that this specification is only possible under implausible calibration parameters. Moreover, during the calibration process, for most reasonable parameters the price-consumption ratio diverges. However, adopting the suggestion proposed by Verdelhan (2009), of fixing the sensitivity function (st) at its steady-state value during calibration and releasing it only during the simulation of the data so as to guarantee pro-cyclical risk-free rates, we manage to find a finite, well-behaved value for the equilibrium price-consumption ratio and to replicate the forward premium anomaly. Setting aside possible inconsistencies of this procedure, under pro-cyclical risk-free rates, as suggested by Wachter (2006), the model generates real yield curves that decrease with maturity regardless of the state of the economy, a result at odds with the underlying literature and with actual yield data.
Abstract:
Verdelhan (2009) shows that if one is to explain the foreign exchange forward premium behavior using Campbell and Cochrane (1999)'s habit formation model, one must specify it in such a way as to generate pro-cyclical short-term risk-free rates. At the calibration stage, we show that this is only possible in Campbell and Cochrane's framework under implausible parameter specifications, given that the price-consumption ratio diverges for almost all parameter sets. We then adopt Verdelhan's shortcut of fixing the sensitivity function λ(st) at its steady-state level to attain a finite value for the price-consumption ratio, releasing it in the simulation stage to ensure pro-cyclical risk-free rates. Beyond the potential inconsistencies that such a procedure may generate, as suggested by Wachter (2006), with pro-cyclical risk-free rates the model generates a downward-sloping real yield curve, which is at odds with the data.
Abstract:
Cognition is a core subject for understanding how humans think and behave. In that sense, it is clear that Cognition is a great ally of Management, as the latter deals with people and is very interested in how they behave, think, and make decisions. However, even though Cognition shows great promise as a field, there are still many topics to be explored and learned in this fairly new area. Kemp & Tenenbaum (2008) tried to model a graph-structure problem in which, given a dataset, the best underlying structure and form would emerge from said dataset by using Bayesian probabilistic inference. This work is very interesting because it addresses a key cognition problem: learning. According to the authors, analogical insights and discoveries, understanding the relationships of elements and how they are organized, play a very important part in cognitive development. That is, these are very basic phenomena that enable learning. Human minds, however, do not function as computers running Bayesian probabilistic inference. People seem to think differently. Thus, we present a cognitively inspired method, KittyCat, based on FARG computer models (like Copycat and Numbo), to solve the proposed problem of discovering the underlying structural form of a dataset.
Abstract:
We develop an affine jump-diffusion (AJD) model with the jump-risk premium determined by both idiosyncratic and systematic sources of risk. While we maintain the classical affine setting of the model, we add a finite set of new state variables that affect the paths of the primitive, under both the actual and the risk-neutral measure, by being related to the primitive's jump process. These new variables are assumed to be common to all the primitives. We present simulations to ensure that the model generates the volatility smile, and compute the "discounted conditional characteristic function" transform that permits the pricing of a wide range of derivatives.
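As an illustration of the simulation side, a plain Merton-style jump-diffusion (a simpler cousin of the paper's AJD setting; all parameter values here are assumptions) already produces the fat-tailed return distribution that underlies the volatility smile:

```python
import numpy as np

def simulate_jump_diffusion(s0=100.0, mu=0.05, sigma=0.2, lam=0.5,
                            jump_mu=-0.1, jump_sigma=0.15,
                            T=1.0, steps=252, n_paths=2000, seed=0):
    # Euler scheme in log-price with Poisson jump arrivals and normally
    # distributed jump sizes (Merton model); at most one jump per step
    # is assumed, which is accurate for small lam * dt.
    rng = np.random.default_rng(seed)
    dt = T / steps
    log_s = np.full(n_paths, np.log(s0))
    for _ in range(steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        jump_hits = rng.random(n_paths) < lam * dt
        jumps = np.where(jump_hits, rng.normal(jump_mu, jump_sigma, n_paths), 0.0)
        log_s += (mu - 0.5 * sigma**2) * dt + sigma * dw + jumps
    return np.exp(log_s)

prices = simulate_jump_diffusion()
log_ret = np.log(prices / 100.0)
print(prices.mean(), log_ret.std())
```

Pricing options across strikes against such simulated terminal prices and inverting to implied volatilities is the standard way to check that the jump component produces a smile rather than the flat implied-volatility line of pure Black-Scholes dynamics.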
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Spiking neural networks, networks that encode information in the timing of spikes, are arising as a new approach within the artificial neural networks paradigm, emergent from cognitive science. One of these new models is the pulsed neural network with radial basis function, a network able to store information in the axonal propagation delays of neurons. Learning algorithms have been proposed for this model, aiming to map input pulses into output pulses. Recently, a new method was proposed to encode constant data into a temporal sequence of spikes, stimulating deeper studies in order to establish the abilities and frontiers of this new approach. However, a well-known problem of this kind of network is the high number of free parameters, more than 15, that must be properly configured or tuned to allow network convergence. This work presents for the first time a new learning function for training this network that allows the automatic configuration of one of the key network parameters: the synaptic weight decreasing factor.
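One common way to encode a constant value into spike times is Gaussian receptive field population coding, a generic scheme used in this literature; the parameter values below are assumptions, not necessarily those of the cited method:

```python
import numpy as np

def encode_value(x, n_neurons=9, t_max=10.0, lo=0.0, hi=1.0, width=None):
    # Gaussian receptive field (population) coding: each neuron has a
    # preferred value; the closer x is to it, the earlier that neuron fires.
    centers = np.linspace(lo, hi, n_neurons)
    width = width if width is not None else (hi - lo) / (n_neurons - 1)
    activation = np.exp(-0.5 * ((x - centers) / width) ** 2)  # in (0, 1]
    return t_max * (1.0 - activation)  # high activation -> early spike time

times = encode_value(0.5)
print(times.round(2))  # the neuron centered at 0.5 fires at t = 0
```

A single scalar thus becomes a volley of precisely timed spikes, which is the kind of input representation the delay-based radial basis function network operates on.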
Abstract:
A model is presented for the respiratory heat loss in sheep, considering both the sensible heat lost by convection (C-R) and the latent heat eliminated by evaporation (E-R). A practical method is described for the estimation of the tidal volume as a function of the respiratory rate. Equations for C-R and E-R are developed and the relative importance of both heat transfer mechanisms is discussed. At air temperatures up to 30 °C sheep have the least respiratory heat loss at air vapour pressures above 1.6 kPa. At an ambient temperature of 40 °C respiratory loss of sensible heat can be nil; for higher temperatures the transfer by convection is negative and thus heat is gained. Convection is a mechanism of minor importance for the respiratory heat transfer in sheep at environmental temperatures above 30 °C. These observations show the importance of respiratory latent heat loss for thermoregulation of sheep in hot climates.
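The sensible (convective) component can be sketched from first principles as ventilation mass flow times the air's heat capacity times the temperature gradient. This is a generic formulation, not the paper's fitted equations; the air density, specific heat and breathing figures are assumed illustrative values:

```python
def respiratory_sensible_heat(resp_rate, tidal_volume, t_exhaled, t_air,
                              rho_air=1.16, cp_air=1006.0):
    # Sensible heat lost by convection in breathing, in watts:
    # (breaths/s * m^3/breath) * kg/m^3 * J/(kg K) * K.
    vent = resp_rate * tidal_volume        # ventilation rate, m^3/s
    return vent * rho_air * cp_air * (t_exhaled - t_air)

# At moderate air temperature the animal loses heat with every breath;
# when the air is hotter than the exhaled air the sign flips and heat is gained.
print(respiratory_sensible_heat(1.0, 0.0005, 38.5, 25.0))  # positive (loss)
print(respiratory_sensible_heat(1.0, 0.0005, 38.5, 40.0))  # negative (gain)
```

The sign reversal above air temperatures near exhaled-air temperature reproduces the abstract's observation that at 40 °C convective transfer can be nil or negative, leaving evaporation as the only effective respiratory heat-loss route.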