119 results for Count data models
Abstract:
While general equilibrium theories of trade stress the role of third-country effects, little work has been done in the empirical foreign direct investment (FDI) literature to test such spatial linkages. This paper aims to provide further insights into long-run determinants of Spanish FDI by considering not only bilateral but also spatially weighted third-country determinants. The few studies carried out so far have focused on FDI flows in a limited number of countries. However, Spanish FDI outflows have risen dramatically since 1995 and today account for a substantial part of global FDI. Therefore, we estimate recently developed Spatial Panel Data models by Maximum Likelihood (ML) procedures for Spanish outflows (1993-2004) to top-50 host countries. After controlling for unobservable effects, we find that spatial interdependence matters and provide evidence consistent with New Economic Geography (NEG) theories of agglomeration, mainly due to complex (vertical) FDI motivations. Spatial Error Models estimations also provide illuminating results regarding the transmission mechanism of shocks.
Abstract:
MicroEconometria is a statistical and econometric package covering the estimation of single-equation models: 1- Simple and multiple regression: residual analysis, influence and outlier diagnostics, multicollinearity diagnostics, robust estimation, prediction, stability diagnostics, bootstrap. 2- Panel regression: fixed effects, random effects and combined effects. 3- Logit and probit regression. 4- Censored regression: tobit and the Heckman selection model. 5- Multinomial regression. 6- Poisson regression: the 'count data' model. 7- Indices based on income, wealth, tax and transfer variables. It generates a report for each of the options covered, containing the estimation results together with the relevant graphical output. The program's input is any database in which the endogenous variable and the exogenous variables of the chosen model can be identified, contained in a Microsoft EXCEL workbook.
Selection bias and unobservable heterogeneity applied to the wage equation of European married women
Abstract:
This paper uses a panel data sample selection model to correct for selection in the analysis of longitudinal labor market data for married women in European countries. We estimate the female wage equation in a framework of unbalanced panel data models with sample selection. The wage equations of females have several potential sources of …
Abstract:
Given a sample from a fully specified parametric model, let Zn be a given finite-dimensional statistic - for example, an initial estimator or a set of sample moments. We propose to (re-)estimate the parameters of the model by maximizing the likelihood of Zn. We call this the maximum indirect likelihood (MIL) estimator. We also propose a computationally tractable Bayesian version of the estimator which we refer to as a Bayesian Indirect Likelihood (BIL) estimator. In most cases, the density of the statistic will be of unknown form, and we develop simulated versions of the MIL and BIL estimators. We show that the indirect likelihood estimators are consistent and asymptotically normally distributed, with the same asymptotic variance as that of the corresponding efficient two-step GMM estimator based on the same statistic. However, our likelihood-based estimators, by taking into account the full finite-sample distribution of the statistic, are higher order efficient relative to GMM-type estimators. Furthermore, in many cases they enjoy a bias reduction property similar to that of the indirect inference estimator. Monte Carlo results for a number of applications including dynamic and nonlinear panel data models, a structural auction model and two DSGE models show that the proposed estimators indeed have attractive finite sample properties.
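The core idea behind the (simulated) MIL estimator can be illustrated with a toy example. Everything below is an illustrative assumption, not the paper's implementation: the model is a normal mean with known variance, the statistic Zn is the sample mean, the unknown density of Zn is approximated with a Gaussian kernel over simulated copies of the statistic, and the "maximization" is a simple grid search with common random numbers across candidate parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_statistic(theta, n, n_sims, rng):
    """Simulate the statistic Zn (here: the sample mean) under the model N(theta, 1)."""
    draws = rng.normal(theta, 1.0, size=(n_sims, n))
    return draws.mean(axis=1)

def indirect_log_likelihood(theta, zn_obs, n, n_sims, bandwidth=0.05):
    """Kernel estimate of the density of Zn at the observed value, under theta."""
    # Fixed seed per call = common random numbers across candidate thetas.
    sims = simulate_statistic(theta, n, n_sims, np.random.default_rng(1))
    kernel = np.exp(-0.5 * ((zn_obs - sims) / bandwidth) ** 2)
    density = kernel.mean() / (bandwidth * np.sqrt(2 * np.pi))
    return np.log(density + 1e-300)

# Observed data from the "true" model (theta = 1.5) and its statistic
n = 200
data = rng.normal(1.5, 1.0, size=n)
zn_obs = data.mean()

# Maximize the simulated indirect likelihood over a parameter grid
grid = np.linspace(0.5, 2.5, 201)
lls = [indirect_log_likelihood(t, zn_obs, n, 1000) for t in grid]
theta_mil = grid[int(np.argmax(lls))]
print(theta_mil)  # should land near zn_obs, as the theory suggests for this statistic
```

In this toy case the MIL estimate essentially reproduces the sample mean; the interest of the approach lies in models where the statistic's finite-sample density is intractable and must be simulated, as in the abstract's DSGE and panel applications.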
Abstract:
The objective of this final-year project (PFC) is to implement a tool that transforms a data model into a complete and correct navigation model. To do this, the WebRatio program, which fully supports the WebML language, is used both as the designer of the data models and as a tool to check the generated navigation models.
Abstract:
The aim of this paper is to analyse empirically entry decisions by generic firms into markets with tough regulation. Generic drugs might be a key driver of competition and cost containment in pharmaceutical markets. The dynamics of reforms of patents and pricing across drug markets in Spain are useful to identify the impact of regulations on generic entry. Estimates from a count data model using a panel of 86 active ingredients during the 1999-2005 period show that the drivers of generic entry in markets with price regulations are similar to those in less regulated markets: generic firms' entries are positively affected by market size and the time trend, and negatively affected by the number of incumbent laboratories and the number of substitute active ingredients. We also find that, contrary to what policy makers expected, the system of reference pricing considerably restrains generic entry. Short-run brand-name drug price reductions are obtained by governments at the cost of the long-run benefits of fostering generic entry and post-patent competition in these markets.
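The count data setup used in studies like this one can be sketched with a plain Poisson regression fitted by maximum likelihood. The data, covariate names and coefficients below are simulated for illustration only; they merely echo the abstract's drivers (market size with a positive effect, incumbents with a negative one) and are not the paper's panel of active ingredients:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated stand-in for an entry-count panel: entry counts driven by
# market size (positive effect) and number of incumbents (negative effect).
n = 500
market_size = rng.normal(0, 1, n)
incumbents = rng.normal(0, 1, n)
X = np.column_stack([np.ones(n), market_size, incumbents])
beta_true = np.array([0.5, 0.8, -0.6])
y = rng.poisson(np.exp(X @ beta_true))  # Poisson counts with a log link

def neg_loglik(beta):
    """Negative Poisson log-likelihood (additive constants dropped)."""
    eta = X @ beta
    return -(y * eta - np.exp(eta)).sum()

res = minimize(neg_loglik, np.zeros(3), method="BFGS")
print(res.x)  # estimates should be close to beta_true
```

A real application would add fixed effects and a time trend for the panel dimension, as the abstract describes; the point here is only the Poisson likelihood with a log link.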
Abstract:
Informal care is today the form of support most commonly used by those who need other people in order to carry out certain activities that are considered basic (eating, dressing, taking a shower, etc.), in Spain and in most other countries in the region. The possible labour opportunity costs incurred by these informal carers, the vast majority of whom are middle-aged women, have not as yet been properly quantified in Spain. It is, however, crucially important to know these quantities at a time when public authorities appear determined to extend the coverage offered up to now as regards long-term care. In this context, we use the Spanish subsample of the European Community Household Panel (1994-2001) to estimate a dynamic ordered probit and so attempt to examine the effects of various types of informal care on labour behaviour. The results obtained indicate the existence of labour opportunity costs for those women who live with the dependent person they care for, but not for those who care for someone outside the household. Furthermore, whereas caregiving for more than a year has negative effects on labour force participation, the same cannot be said of those who start or stop caregiving.
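A static ordered probit, the simpler cousin of the dynamic model estimated in the paper, can be sketched as follows. All data, cutpoints and coefficients are simulated for illustration; a latent index crosses estimated thresholds to produce the ordered outcome (e.g. out of the labour force / part-time / full-time):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy ordered outcome: latent y* = beta*x + e, observed category via cutpoints.
n = 1000
x = rng.normal(0, 1, n)
beta_true, cuts_true = 1.0, np.array([-0.5, 0.8])
ystar = beta_true * x + rng.standard_normal(n)
y = np.digitize(ystar, cuts_true)  # categories 0, 1, 2

def neg_loglik(params):
    beta, c0, dc = params
    cuts = np.array([c0, c0 + np.exp(dc)])       # enforce increasing cutpoints
    upper = np.append(cuts, np.inf)[y]           # upper threshold per observation
    lower = np.append(-np.inf, cuts)[y]          # lower threshold per observation
    p = norm.cdf(upper - beta * x) - norm.cdf(lower - beta * x)
    return -np.log(np.clip(p, 1e-12, None)).sum()

res = minimize(neg_loglik, np.array([0.5, -0.2, 0.0]), method="BFGS")
beta_hat = res.x[0]
cuts_hat = np.array([res.x[1], res.x[1] + np.exp(res.x[2])])
print(beta_hat, cuts_hat)
```

The dynamic version in the paper additionally conditions on the lagged outcome and handles the initial-conditions problem, which this sketch deliberately omits.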
Abstract:
In this paper we present the ViRVIG Institute, a recently created institution that joins two well-known research groups: MOVING in Barcelona, and GGG in Girona. Our main research topics are Virtual Reality devices and interaction techniques, complex data models, realistic materials and lighting, geometry processing, and medical image visualization. We briefly introduce the history of both research groups and present some representative projects. Finally, we sketch our lines of future research.
Abstract:
This paper provides empirical evidence that continuous time models with one factor of volatility, in some conditions, are able to fit the main characteristics of financial data. It also reports the importance of the feedback factor in capturing the strong volatility clustering of data, caused by a possible change in the pattern of volatility in the last part of the sample. We use the Efficient Method of Moments (EMM) by Gallant and Tauchen (1996) to estimate logarithmic models with one and two stochastic volatility factors (with and without feedback) and to select among them.
Abstract:
Report for the scientific sojourn carried out at the University of New South Wales from February to June 2007. Two different biogeochemical models are coupled to a three-dimensional configuration of the Princeton Ocean Model (POM) for the Northwestern Mediterranean Sea (Ahumada and Cruzado, 2007). The first biogeochemical model (BLANES) is the three-dimensional version of the model described by Bahamon and Cruzado (2003) and computes the nitrogen fluxes through six compartments using semi-empirical descriptions of biological processes. The second biogeochemical model (BIOMEC) is the biomechanical NPZD model described in Baird et al. (2004), which uses a combination of physiological and physical descriptions to quantify the rates of planktonic interactions. Physical descriptions include, for example, the diffusion of nutrients to phytoplankton cells and the encounter rate of predators and prey. The link between physical and biogeochemical processes in both models is expressed by the advection-diffusion of the non-conservative tracers. The similarities in the mathematical formulation of the biogeochemical processes in the two models are exploited to determine the parameter set for the biomechanical model that best fits the parameter set used in the first model. Three years of integration have been carried out for each model to reach the so-called perpetual-year run for biogeochemical conditions. Outputs from both models are averaged monthly and then compared to remote sensing images of chlorophyll obtained from the MERIS sensor.
Abstract:
A method to estimate DSGE models using the raw data is proposed. The approach links the observables to the model counterparts via a flexible specification which does not require the model-based component to be solely located at business cycle frequencies, allows the non-model-based component to take various time series patterns, and permits model misspecification. Applying standard data transformations induces biases in structural estimates and distortions in the policy conclusions. The proposed approach recovers important model-based features in selected experimental designs. Two widely discussed issues are used to illustrate its practical use.
Abstract:
We study the statistical properties of three estimation methods for a model of learning that is often fitted to experimental data: quadratic deviation measures without unobserved heterogeneity, and maximum likelihood with and without unobserved heterogeneity. After discussing identification issues, we show that the estimators are consistent and provide their asymptotic distribution. Using Monte Carlo simulations, we show that ignoring unobserved heterogeneity can lead to seriously biased estimates in samples of the typical length of actual experiments. Better small-sample properties are obtained if unobserved heterogeneity is introduced. That is, rather than estimating the parameters for each individual, the individual parameters are treated as random variables, and the distribution of those random variables is estimated.
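The random-parameters idea in the last sentence can be sketched with simulated maximum likelihood on a toy panel. The model below (repeated binary choices with an individual propensity drawn from a normal distribution, integrated out by Monte Carlo draws) is an invented stand-in for the learning model, chosen only to show the mechanics:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)

# Toy panel: each individual makes T binary choices with an
# individual-specific propensity alpha_i ~ N(mu, sigma^2).
N, T = 200, 20
mu_true, sigma_true = 0.5, 1.0
alpha = rng.normal(mu_true, sigma_true, N)
y = rng.random((N, T)) < expit(alpha)[:, None]

draws = rng.standard_normal((N, 100))  # fixed simulation draws per individual

def neg_sim_loglik(params):
    """Integrate the heterogeneity out by averaging over simulated alphas."""
    mu, log_sigma = params
    a = mu + np.exp(log_sigma) * draws          # (N, R) candidate alphas
    p = expit(a)
    k = y.sum(axis=1, keepdims=True)            # successes per individual
    lik = p ** k * (1 - p) ** (T - k)           # panel likelihood per draw
    return -np.log(lik.mean(axis=1)).sum()      # simulated likelihood

res = minimize(neg_sim_loglik, np.array([0.0, 0.0]), method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)
```

Only the two distributional parameters (mu, sigma) are estimated, not N individual propensities, which is exactly the dimensionality reduction the abstract describes.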
Abstract:
In this paper we analyse, using Monte Carlo simulation, the possible consequences of incorrect assumptions on the true structure of the random effects covariance matrix and the true correlation pattern of residuals for the performance of an estimation method for nonlinear mixed models. The procedure under study is the well-known linearization method due to Lindstrom and Bates (1990), implemented in the nlme library of S-Plus and R. Its performance is studied in terms of bias, mean square error (MSE), and true coverage of the associated asymptotic confidence intervals. Ignoring other criteria, such as the convenience of avoiding over-parameterised models, it seems worse to erroneously assume some structure than to assume no structure when the latter would be adequate.
Abstract:
In this correspondence, we propose applying hidden Markov model (HMM) theory to the problem of blind channel estimation and data detection. The Baum–Welch (BW) algorithm, which is able to estimate all the parameters of the model, is enriched by introducing some linear constraints emerging from a linear FIR hypothesis on the channel. Additionally, a version of the algorithm that is suitable for time-varying channels is also presented. Performance is analyzed in a GSM environment using standard test channels and is found to be close to that obtained with a nonblind receiver.
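A minimal, unconstrained Baum–Welch for a discrete-observation HMM illustrates the algorithm the paper builds on; the FIR-channel constraints and the GSM setting are specific to the paper and not reproduced here. The two-state, two-symbol toy chain below is invented for the sketch, and the test of correctness is the EM guarantee that each re-estimation step cannot decrease the data log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discrete HMM: 2 hidden states, 2 output symbols.
A_true = np.array([[0.9, 0.1], [0.2, 0.8]])   # transition matrix
B_true = np.array([[0.8, 0.2], [0.3, 0.7]])   # emission matrix
pi_true = np.array([0.5, 0.5])

# Generate an observation sequence from the true chain
T = 2000
states = [rng.choice(2, p=pi_true)]
for _ in range(T - 1):
    states.append(rng.choice(2, p=A_true[states[-1]]))
obs = np.array([rng.choice(2, p=B_true[s]) for s in states])

def forward_backward(A, B, pi, obs):
    """Scaled forward-backward recursions; c holds the scaling factors."""
    T, n = len(obs), len(pi)
    alpha = np.zeros((T, n)); beta = np.zeros((T, n)); c = np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
    return alpha, beta, c

def baum_welch_step(A, B, pi, obs):
    """One EM re-estimation of (A, B, pi); returns the pre-update log-likelihood."""
    alpha, beta, c = forward_backward(A, B, pi, obs)
    gamma = alpha * beta
    xi = (alpha[:-1, :, None] * A[None]
          * (B[:, obs[1:]].T * beta[1:])[:, None, :]) / c[1:, None, None]
    A_new = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    B_new = np.vstack([gamma[obs == k].sum(axis=0) for k in range(B.shape[1])]).T
    B_new /= gamma.sum(axis=0)[:, None]
    return A_new, B_new, gamma[0] / gamma[0].sum(), np.log(c).sum()

# Iterate from a deliberately poor initial guess
A = np.array([[0.6, 0.4], [0.4, 0.6]])
B = np.array([[0.6, 0.4], [0.4, 0.6]])
pi = np.array([0.5, 0.5])
lls = []
for _ in range(30):
    A, B, pi, ll = baum_welch_step(A, B, pi, obs)
    lls.append(ll)
print(lls[0], lls[-1])  # log-likelihood should never decrease across iterations
```

The paper's contribution is precisely to replace the unconstrained M-step above with one that respects linear FIR constraints on the channel, and to adapt it to time-varying channels.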
Abstract:
Context. The understanding of Galaxy evolution can be facilitated by the use of population synthesis models, which allow one to test hypotheses on the star formation history, star evolution, and the chemical and dynamical evolution of the Galaxy. Aims. The new version of the Besançon Galaxy Model (hereafter BGM) aims to provide a more flexible and powerful tool to investigate the Initial Mass Function (IMF) and Star Formation Rate (SFR) of the Galactic disc. Methods. We present a new strategy for the generation of thin disc stars which treats the IMF, SFR and evolutionary tracks as free parameters. We have updated most of the ingredients for the star count production and, for the first time, binary stars are generated in a consistent way. We keep in this new scheme the local dynamical self-consistency as in Bienaymé et al. (1987). We then compare simulations from the new model with Tycho-2 data and the local luminosity function, as a first test to verify and constrain the new ingredients. The effects of changing thirteen different ingredients of the model are systematically studied. Results. For the first time, a full-sky comparison is performed between the BGM and data. This strategy allows us to constrain the IMF slope at high masses, which is found to be close to 3.0, excluding a shallower slope such as Salpeter's. The SFR is found to be decreasing whatever IMF is assumed. The model is compatible with a local dark matter density of 0.011 M⊙ pc⁻³, implying that there is no compelling evidence for a significant amount of dark matter in the disc. While the model is fitted to Tycho-2 data, a magnitude-limited sample with V < 11, we check that it is still consistent with fainter stars. Conclusions. The new model constitutes a new basis for further comparisons with large-scale surveys and is being prepared to become a powerful tool for the analysis of the Gaia mission data.