75 results for Correlated inventory models
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
In this paper we argue that inventory models are probably not useful models of household money demand, because the majority of households do not hold any interest-bearing assets. The relevant decision for most people is not the fraction of assets to be held in interest-bearing form, but whether to hold any such assets at all. The implications of this realization are interesting and important. We find that (a) the elasticity of money demand is very small when the interest rate is small, (b) the probability that a household holds any amount of interest-bearing assets is positively related to the level of financial assets, and (c) the cost of adopting financial technologies is positively related to age and negatively related to the level of education. Unlike traditional methods of money demand estimation, our methodology allows for the estimation of the interest elasticity at low values of the nominal interest rate. The finding that the elasticity is very small for interest rates below 5 percent suggests that the welfare costs of inflation are small. At interest rates of 6 percent, the elasticity is close to 0.5. We find that roughly one half of this elasticity can be attributed to the Baumol-Tobin or intensive margin, and half to the new adopters or extensive margin. The intensive margin is less important at lower interest rates and more important at higher interest rates.
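The intensive (Baumol-Tobin) margin mentioned in this abstract follows the classic square-root rule, under which the interest elasticity of money demand is 1/2 in absolute value. A minimal sketch, with hypothetical parameter values for the per-trip transaction cost c and income Y (illustrative only, not the paper's calibration):

```python
import math

def baumol_tobin_money_demand(c, Y, i):
    """Square-root rule: average money holdings M* = sqrt(c*Y / (2*i)),
    which minimizes transaction costs plus forgone interest."""
    return math.sqrt(c * Y / (2 * i))

def elasticity(i, c=2.0, Y=100.0, h=1e-6):
    """Numerical interest elasticity d ln M / d ln i at rate i."""
    m0 = baumol_tobin_money_demand(c, Y, i)
    m1 = baumol_tobin_money_demand(c, Y, i * (1 + h))
    return (math.log(m1) - math.log(m0)) / math.log(1 + h)

print(elasticity(0.05))  # -0.5: the square-root rule's constant elasticity
```

The constant elasticity of -1/2 at the intensive margin is what makes the extensive (adoption) margin necessary to explain the very low elasticities the paper finds at low interest rates.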
Abstract:
In this paper we develop two models for an inventory system in which the distributor manages the inventory at the retailer's location. This type of system corresponds to the Vendor Managed Inventory (VMI) systems described in the literature. These systems are very common in many different types of industries, such as retailing and manufacturing, although they assume different characteristics. The objective of our model is to minimize the total inventory cost for the distributor in a multi-period, multi-retailer setting. The inventory system includes holding and stock-out costs, and we study the case where an additional fixed setup cost is charged per delivery. We construct a numerical experiment to analyze the model's behavior and observe the impact of the model's characteristics on the solutions.
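The cost structure this abstract describes, holding and stock-out costs plus a fixed setup charge per delivery, can be sketched for a single retailer over several periods. The cost parameters and the lost-sales assumption below are illustrative choices, not the paper's model:

```python
def vmi_cost(deliveries, demand, holding=1.0, stockout=5.0, setup=10.0):
    """Total distributor cost for one retailer over T periods.
    deliveries[t]: units delivered at the start of period t
    demand[t]:    demand during period t
    A fixed setup cost is charged for each positive delivery; end-of-period
    inventory incurs a holding cost, and unmet demand a stock-out cost
    (lost sales assumed, i.e. shortages are not backordered)."""
    inv, cost = 0.0, 0.0
    for q, d in zip(deliveries, demand):
        if q > 0:
            cost += setup              # fixed charge per delivery
        inv += q - d
        if inv >= 0:
            cost += holding * inv      # holding cost on leftover stock
        else:
            cost += stockout * (-inv)  # penalty on unmet demand
            inv = 0.0                  # lost sales: reset to zero
    return cost

# One delivery of 10 units covering two periods of demand 6:
print(vmi_cost([10, 0], [6, 6]))  # 10 (setup) + 4 (holding) + 10 (stock-out) = 24.0
```

A multi-retailer version would sum this cost over retailers, with the distributor choosing the delivery schedule to minimize the total.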
Abstract:
We study a class of models of correlated random networks in which vertices are characterized by hidden variables controlling the establishment of edges between pairs of vertices. We find analytical expressions for the main topological properties of these models as a function of the distribution of hidden variables and the probability of connecting vertices. The expressions obtained are checked by means of numerical simulations in a particular example. The general model is extended to describe a practical algorithm to generate random networks with an a priori specified correlation structure. We also present an extension of the class, to map nonequilibrium growing networks to networks with hidden variables that represent the time at which each vertex was introduced in the system.
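The generation scheme this abstract describes, hidden variables drawn from a distribution and edges established with a probability depending on the endpoints' variables, can be sketched as follows. The exponential distribution and the factorized connection kernel are illustrative choices, not the paper's specification:

```python
import random

def hidden_variable_graph(n, rho, r, seed=0):
    """Generate an undirected graph on n vertices: each vertex i gets a
    hidden variable h_i drawn via rho(rng); each pair (i, j) is joined
    independently with probability r(h_i, h_j)."""
    rng = random.Random(seed)
    h = [rho(rng) for _ in range(n)]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if rng.random() < r(h[i], h[j])]
    return h, edges

# Example: exponentially distributed hidden variables and a factorized
# kernel r(h, h') = min(1, h*h'/n), so expected degree grows with h_i.
h, edges = hidden_variable_graph(
    200,
    rho=lambda rng: rng.expovariate(1.0),
    r=lambda hi, hj: min(1.0, hi * hj / 200))
```

Choosing a correlated (non-factorized) kernel r(h, h') is what allows an a priori specified correlation structure between the degrees of connected vertices.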
Abstract:
Background: In longitudinal studies where subjects experience recurrent incidents over a period of time, such as respiratory infections, fever or diarrhea, statistical methods are required to take into account the within-subject correlation. Methods: For repeated-events data with censored failure, the independent increment (AG), marginal (WLW) and conditional (PWP) models are three multiple-failure models that generalize Cox's proportional hazards model. In this paper, we review the efficiency, accuracy and robustness of all three models under simulated scenarios with varying degrees of within-subject correlation, censoring levels, maximum number of possible recurrences and sample size. We also study the methods' performance on a real dataset from a cohort study with bronchial obstruction. Results: We find substantial differences between the methods, and no single method is optimal. AG and PWP seem preferable to WLW for low correlation levels, but the situation reverses for high correlations. Conclusions: All methods are robust to censoring, worsen with increasing recurrence levels, and share a bias problem which, among other consequences, makes asymptotic normal confidence intervals not fully reliable, although they are well developed theoretically.
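Within-subject correlation of the kind these models address is commonly induced in simulation studies through a shared frailty: a subject-specific random effect that multiplies the event rate. A minimal sketch with a gamma frailty and exponential gap times; these distributional choices are illustrative, not the paper's simulation design:

```python
import random

def simulate_recurrent(n_subjects, max_events, censor_time, theta,
                       base_rate=1.0, seed=0):
    """Simulate recurrent event times per subject. A gamma frailty
    Z ~ Gamma(shape=1/theta, scale=theta) (mean 1, variance theta)
    multiplies the baseline rate, so larger theta means stronger
    within-subject correlation. Events after censor_time are censored."""
    rng = random.Random(seed)
    data = []
    for subj in range(n_subjects):
        z = rng.gammavariate(1.0 / theta, theta)
        t, events = 0.0, []
        while len(events) < max_events:
            t += rng.expovariate(base_rate * z)  # exponential gap time
            if t > censor_time:
                break                            # censored
            events.append(t)
        data.append((subj, events))
    return data

data = simulate_recurrent(n_subjects=50, max_events=5,
                          censor_time=10.0, theta=0.5)
```

Varying theta, the censoring time and max_events reproduces the kind of scenario grid (correlation level, censoring, recurrence cap, sample size) the abstract describes.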
Abstract:
Mushroom picking has become a widespread autumn recreational activity in the Central Pyrenees and other regions of Spain. Predictive models that relate mushroom production or fungal species richness to forest stand and site characteristics are not available. This study used mushroom production data from 24 Scots pine plots over 3 years to develop a predictive model that could facilitate forest management decisions when comparing silvicultural options in terms of mushroom production. Mixed modelling was used to model the dependence of mushroom production on stand and site factors. The results showed that production was greatest when stand basal area was approximately 20 m2 ha-1. Increasing elevation and northern aspect increased total mushroom production as well as the production of edible and marketed mushrooms. Increasing slope decreased production. Marketed Lactarius spp., the most important group collected in the region, showed similar relationships. The annual variation in mushroom production correlated with autumn rainfall. Mushroom species richness was highest when total production was highest.
Abstract:
The present study tests the relationships between the three frequently used personality models evaluated by the Temperament Character Inventory-Revised (TCI-R), Neuroticism Extraversion Openness Five Factor Inventory - Revised (NEO-FFI-R) and Zuckerman-Kuhlman Personality Questionnaire-50-Cross-Cultural (ZKPQ-50-CC). The results were obtained with a sample of 928 volunteer subjects from the general population aged between 17 and 28 years old. Frequency distributions and alpha reliabilities with the three instruments were acceptable. Correlational and factorial analyses showed that several scales in the three instruments share an appreciable amount of common variance. Five factors emerged from a principal components analysis. The first factor comprised A (Agreeableness), Co (Cooperativeness) and Agg-Host (Aggressiveness-Hostility), with secondary loadings in C (Conscientiousness) and SD (Self-directiveness) from other factors. The second factor was composed of N (Neuroticism), N-Anx (Neuroticism-Anxiety), HA (Harm Avoidance) and SD (Self-directiveness). The third factor comprised Sy (Sociability), E (Extraversion), RD (Reward Dependence), ImpSS (Impulsive Sensation Seeking) and NS (Novelty Seeking). The fourth factor comprised Ps (Persistence), Act (Activity) and C, whereas the fifth and last factor was composed of O (Openness) and ST (Self-Transcendence). Confirmatory factor analyses indicate that the scales in each model are highly interrelated and define the specified latent dimension well. Similarities and differences between these three instruments are further discussed.
Abstract:
This paper presents a review of the forest models developed in Spain over recent years, covering both timber and non-timber production as well as forest dynamics (regeneration, mortality). Whole-stand, diameter-class and individual-tree models are presented. The models developed to date have been built from data drawn from permanent plots, trials and the National Forest Inventory. The paper describes the various submodels developed so far, as well as the software platforms that allow these models to be used. The main prospects for the development of forest modelling in Spain are also discussed.
Identification-commitment inventory (ICI-Model): confirmatory factor analysis and construct validity
Abstract:
The aim of this study is to confirm the factorial structure of the Identification-Commitment Inventory (ICI) developed within the frame of the Human System Audit (HSA) (Quijano et al. in Revist Psicol Soc Apl 10(2):27-61, 2000; Pap Psicól Revist Col Of Psicó 29:92-106, 2008). Commitment and identification are understood by the HSA at an individual level as part of the quality of human processes and resources in an organization, and therefore as antecedents of important organizational outcomes, such as personnel turnover intentions, organizational citizenship behavior, etc. (Meyer et al. in J Org Behav 27:665-683, 2006). The theoretical integrative model underlying the ICI (Quijano et al. 2000) was tested in a sample (N = 625) of workers in a Spanish public hospital. Confirmatory factor analysis through structural equation modeling was performed. An elliptical least squares solution was chosen as the estimation procedure on account of the non-normal distribution of the variables. The results confirm the goodness of fit of an integrative model in which Commitment and Identification are related but operationally distinct.
Abstract:
This comment corrects the errors in the estimation process that appear in Martins (2001). The first error is in the parametric probit estimation, as the previously presented results do not maximize the log-likelihood function. In the global maximum more variables become significant. As for the semiparametric estimation method, the kernel function used in Martins (2001) can take on both positive and negative values, which implies that the participation probability estimates may be outside the interval [0,1]. We have solved the problem by applying local smoothing in the kernel estimation, as suggested by Klein and Spady (1993).
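The boundedness issue raised here, that a kernel taking negative values can push probability estimates outside [0,1], disappears when a nonnegative kernel is used, because the estimate then becomes a convex combination of the observed binary outcomes. A sketch with a Gaussian kernel in the Nadaraya-Watson form (an illustration of the principle, not the Klein and Spady estimator itself):

```python
import math

def nw_probability(x0, xs, ys, bandwidth):
    """Nadaraya-Watson estimate of P(y = 1 | x = x0) with a Gaussian
    kernel. The weights are nonnegative and the ys lie in {0, 1}, so
    the weighted average necessarily stays inside [0, 1]."""
    weights = [math.exp(-0.5 * ((x0 - x) / bandwidth) ** 2) for x in xs]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, ys)) / total

# Hypothetical participation data: y = 1 near and above x = 0.
p = nw_probability(0.0, [-1.0, 0.0, 1.0], [0, 1, 1], bandwidth=1.0)
```

With a kernel that takes negative values (as in higher-order kernels), some of these weights flip sign and the ratio can leave [0,1], which is the problem the local-smoothing correction addresses.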
Abstract:
This paper provides empirical evidence that continuous-time models with one volatility factor are, under some conditions, able to fit the main characteristics of financial data. It also reports the importance of the feedback factor in capturing the strong volatility clustering of the data, caused by a possible change in the pattern of volatility in the last part of the sample. We use the Efficient Method of Moments (EMM) of Gallant and Tauchen (1996) to estimate logarithmic models with one and two stochastic volatility factors (with and without feedback) and to select among them.
Abstract:
We analyze the effects of uncertainty and private information on horizontal mergers. Firms face uncertain demands or costs and receive private signals. They may decide to merge sharing their private information. If the uncertainty parameters are independent and the signals are perfect, uncertainty generates an informational advantage only to the merging firms, increasing merger incentives and decreasing free-riding effects. Thus, mergers become more profitable and stable. These results generalize to the case of correlated parameters if the correlation is not very severe, and for perfect correlation if the firms receive noisy signals. From the normative point of view, mergers are socially less harmful compared to deterministic markets and may even be welfare enhancing. If the signals are, instead, publicly observed, uncertainty does not necessarily give more incentives to merge, and mergers are not always less socially harmful.
Abstract:
This paper develops a theory of the joint allocation of formal control and cash-flow rights in venture capital deals. We argue that when the need for investor support calls for very high-powered outside claims, entrepreneurs should optimally retain formal control in order to avoid excessive interference. Hence, we predict that risky claims should be negatively correlated with control rights, both along the life of a start-up and across deals. This challenges the idea that risky claims should always be associated with more formal control, and is in line with contractual terms increasingly used in venture capital, in corporate venturing and in partnership deals between biotech start-ups and large drug companies. The paper provides a theoretical explanation for some puzzling evidence documented in Gompers (1997) and Kaplan and Stromberg (2000), namely the inclusion in venture capital contracts of contingencies that trigger both a reduction in VC control and the conversion of her preferred stocks into common stocks.
Abstract:
Expectations are central to behaviour. Despite the existence of subjective expectations data, the standard approach is to ignore them, to posit a model of behaviour and to infer expectations from realisations. In the context of income models, we reveal the informational gain obtained from using both a canonical model and subjective expectations data. We propose a test for this informational gain, and illustrate our approach with an application to the problem of measuring income risk.