26 results for Empirical Best Linear Unbiased Predictor
Abstract:
We explain the empirical linear relations between the triplet scattering length, or the asymptotic normalization constant, and the deuteron matter radius using the effective range expansion in a manner similar to a recent paper by Bhaduri et al. We emphasize the corrections due to the finite force range and to shape dependence. The discrepancy between the experimental values and the empirical line shows the need for a larger value of the wound extension, a parameter which we introduce here. Short-distance nonlocality of the n-p interaction is a plausible explanation for the discrepancy.
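The effective range expansion invoked above is, in its standard s-wave form (generic notation assumed here, not taken verbatim from the paper):

```latex
% Standard s-wave effective range expansion; a_t is the triplet
% scattering length, r_t the effective range. Shape dependence
% enters through the O(k^4) and higher terms.
k \cot \delta_t(k) = -\frac{1}{a_t} + \frac{1}{2}\, r_t\, k^2 + \mathcal{O}(k^4)
```

Truncating at the effective range term is what produces the approximately linear relations between the scattering length and the deuteron observables; the corrections discussed above sit in the neglected higher-order terms.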
Abstract:
The present work deals with quantifying group characteristics. Specifically, dyadic measures of interpersonal perceptions were used to forecast group performance. 46 student groups (24 of four members and 22 of five) were studied in a real educational assignment context, and marks were gathered as an indicator of group performance. Our results show that dyadic measures of interpersonal perceptions predict final marks. Linear regression analysis explained 85% and 85.6% of group performance for group sizes of four and five, respectively. Comparable results in the scientific literature based on the individualistic approach explain no more than 18%. The results of the present study support the utility of dyadic approaches for predicting group performance in social contexts.
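The variance-explained figures above come from ordinary linear regression. A minimal sketch of that computation, on synthetic data (the predictor names and numbers below are illustrative assumptions, not the study's actual dyadic measures):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical group-level predictors (e.g. mean mutual liking and mean
# perceived competence within each group) for 46 groups -- synthetic data.
n_groups = 46
X = rng.normal(size=(n_groups, 2))
true_w = np.array([1.5, 0.8])
marks = X @ true_w + rng.normal(scale=0.3, size=n_groups)  # group marks

# Ordinary least squares with an intercept column.
X1 = np.column_stack([np.ones(n_groups), X])
beta, *_ = np.linalg.lstsq(X1, marks, rcond=None)

# Coefficient of determination R^2: the share of variance explained,
# the quantity reported as 85% / 85.6% in the abstract.
pred = X1 @ beta
r2 = 1 - np.sum((marks - pred) ** 2) / np.sum((marks - marks.mean()) ** 2)
```

With strongly predictive regressors, as in this synthetic setup, R² lands close to 1; the study's contribution is that dyadic predictors achieve such values where individualistic predictors do not.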
Abstract:
The main goal of this article is to provide an answer to the question: "Does anything forecast exchange rates, and if so, which variables?". It is well known that exchange rate fluctuations are very difficult to predict using economic models, and that a random walk forecasts exchange rates better than any economic model (the Meese and Rogoff puzzle). However, the recent literature has identified a series of fundamentals/methodologies that claim to have resolved the puzzle. This article provides a critical review of the recent literature on exchange rate forecasting and illustrates the new methodologies and fundamentals that have been recently proposed in an up-to-date, thorough empirical analysis. Overall, our analysis of the literature and the data suggests that the answer to the question: "Are exchange rates predictable?" is "It depends": on the choice of predictor, forecast horizon, sample period, model, and forecast evaluation method. Predictability is most apparent when one or more of the following hold: the predictors are Taylor rule fundamentals or net foreign assets, the model is linear, and a small number of parameters are estimated. The toughest benchmark is the random walk without drift.
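The random-walk-without-drift benchmark mentioned above is simply the forecast "tomorrow's rate equals today's rate", typically compared to a model by out-of-sample RMSE. A minimal sketch on simulated data (the "fundamentals" forecast here is a placeholder, not any model from the literature):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated log exchange rate: a pure random walk without drift, i.e.
# exactly the data-generating process under which no model can win.
T = 500
s = np.cumsum(rng.normal(scale=0.01, size=T))

# Random-walk-without-drift forecast: predict no change.
rw_forecast = s[:-1]

# A stand-in "economic model" forecast: the random walk plus noise,
# mimicking estimation error in a fitted model (illustrative only).
fund_forecast = s[:-1] + rng.normal(scale=0.005, size=T - 1)

actual = s[1:]
def rmse(f):
    return np.sqrt(np.mean((actual - f) ** 2))
```

On a series that truly is a random walk, any noisy model forecast has a strictly larger expected RMSE than the no-change benchmark, which is why beating it is the standard of evidence in this literature.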
Abstract:
Biometric system performance can be improved by means of data fusion. Several kinds of information can be fused in order to obtain a more accurate classification (identification or verification) of an input sample. In this paper we present a method for computing the weights in a weighted-sum fusion of score combinations, by means of a likelihood model. The maximum likelihood estimation is set up as a linear programming problem. Each score is derived from a GMM classifier working on a different feature extractor. Our experimental results assessed the robustness of the system against changes over time (different sessions) and against a change of microphone. The improvements obtained were significantly better (error bars of two standard deviations) than a uniform weighted sum, a uniform weighted product, or the best single classifier. The proposed method scales computationally with the number of scores to be fused as the simplex method for linear programming does.
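The weighted-sum combination step can be sketched as follows. This is a toy illustration with fixed weights and synthetic Gaussian scores; the paper's actual contribution, deriving the weights from a likelihood model solved as a linear program, is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic match scores from two classifiers, for genuine and impostor
# trials (distribution parameters are illustrative assumptions).
n = 1000
genuine  = np.column_stack([rng.normal(1.0, 0.8, n), rng.normal(1.2, 1.0, n)])
impostor = np.column_stack([rng.normal(-1.0, 0.8, n), rng.normal(-0.8, 1.0, n)])

w = np.array([0.6, 0.4])  # fusion weights, summing to 1 (fixed, not ML-estimated)

def error_rate(scores_g, scores_i, thr=0.0):
    # Average of false rejects and false accepts at a fixed threshold.
    return 0.5 * (np.mean(scores_g < thr) + np.mean(scores_i >= thr))

fused_err  = error_rate(genuine @ w, impostor @ w)
single_err = min(error_rate(genuine[:, j], impostor[:, j]) for j in range(2))
```

Because the two classifiers' errors are independent here, the weighted sum reduces the score variance within each class and so lowers the error rate relative to either classifier alone, which is the effect the fusion weights are optimized for.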
Abstract:
Artifacts are present in most electroencephalography (EEG) recordings, making the data difficult to interpret or analyze. In this paper a cleaning procedure based on a multivariate extension of empirical mode decomposition is used to improve the quality of the data. This is achieved by applying the cleaning method to the raw EEG data. A synchrony measure is then computed on the raw and the cleaned data in order to compare the resulting classification rates. Two classifiers are used: linear discriminant analysis and neural networks. In both cases, the classification rate improves by about 20%.
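Of the two classifiers named above, linear discriminant analysis has a simple closed form. A minimal two-class sketch on synthetic features (the EEG cleaning and synchrony steps are not reproduced; the feature values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy two-class, two-dimensional features (e.g. synchrony measures per
# trial for two experimental conditions) -- synthetic data.
n = 200
X0 = rng.normal([0.0, 0.0], 1.0, size=(n, 2))
X1 = rng.normal([2.0, 1.0], 1.0, size=(n, 2))
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n), np.ones(n)]

# Fisher linear discriminant: w = S_w^{-1} (mu1 - mu0),
# with the decision threshold at the midpoint of the projected means.
mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
w = np.linalg.solve(Sw, mu1 - mu0)
thr = w @ (mu0 + mu1) / 2

pred = (X @ w > thr).astype(float)
acc = np.mean(pred == y)  # classification rate
```

Cleaner inputs shift the two class distributions apart, which is how an artifact-removal step can raise this classification rate without touching the classifier itself.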
Abstract:
We provide robust and compelling evidence of the marked impact of tertiary education on the economic growth of less developed countries and of its relatively smaller impact on the growth of developed ones. Our results argue in favor of the accumulation of high skill levels, especially in technologically under-developed countries and, contrary to common wisdom, independently of the fact that these economies might initially produce lower-technology goods or perform technology imitation. Our results are robust to the different measures used to proxy human capital and to the adjustments made for cross-country differences in the quality of education. Country-specific institutional quality, as well as other indicators including legal origin, religious fractionalization, and openness to trade, have been used to check the robustness of the results. These factors are also shown to speed up technology convergence, thereby confirming previous empirical studies. Our estimates tackle problems of endogeneity by adopting a variety of techniques, including instrumental variables (for both panel and cross-section analyses) and the two-step efficient dynamic system GMM.
Abstract:
The high sensitivity and excellent timing accuracy of Geiger-mode avalanche photodiodes make them ideal sensors for pixel detectors for particle tracking in high-energy physics experiments at future linear colliders. Nevertheless, it is well known that these sensors suffer from dark counts and afterpulsing noise, which induce false hits (indistinguishable from event detection) as well as an increase in the necessary area of the readout system. In this work, we present a comparison between APDs fabricated in a high-voltage 0.35 µm and a high-integration 0.13 µm commercially available CMOS technology, performed to determine which of them best fits the particle collider requirements. In addition, a readout circuit that allows low-noise operation is introduced. Experimental characterization of the proposed pixel is also presented.
Abstract:
This work studies the relationship between lexical development and morphosyntactic development. Specifically, we aim to explore which type of vocabulary best predicts the development of verbal morphology and of grammatical complexity, and to establish the type of relationship between lexical and morphosyntactic development. The sample comprises 517 children aged between 18 and 30 months. Data were collected with the Catalan adaptation of the MacArthur-Bates Communicative Development Inventories (CDI). The results show that the best predictor of morphological and grammatical development is closed-class vocabulary, together with general vocabulary. Furthermore, a predominantly linear relationship is observed between lexical development and morphosyntactic development.
Abstract:
Economies are open complex adaptive systems far from thermodynamic equilibrium, and neo-classical environmental economics does not seem to be the best way to describe the behaviour of such systems. Standard econometric analysis (i.e. time series) takes a deterministic and predictive approach, which encourages the search for predictive policy to 'correct' environmental problems. Given the characteristics of economic systems, however, an ex-post analysis seems more appropriate: one that describes the emergence of such systems' properties and sees policy as a social steering mechanism. With this background, some recent empirical work published in the field of ecological economics that follows the approach defended here is presented. Finally, we conclude that a predictive use of econometrics (i.e. time series analysis) in ecological economics should be limited to cases in which uncertainty decreases, which is not the normal situation when analysing the evolution of economic systems. That does not mean, however, that we should not use empirical analysis. On the contrary, it is to be encouraged, but from a structural and ex-post point of view.
Abstract:
We consider linear stochastic differential-algebraic equations with constant coefficients and additive white noise. Due to the nature of this class of equations, the solution must be defined as a generalised process (in the sense of Dawson and Fernique). We provide sufficient conditions for the law of the variables of the solution process to be absolutely continuous with respect to Lebesgue measure.
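A linear SDAE of this class can be written schematically as follows (generic form; the notation is an assumption, not taken from the paper):

```latex
% Linear stochastic differential-algebraic equation with constant
% coefficients and additive white noise. The matrix A may be singular:
% the singular part encodes the purely algebraic constraints.
A\,\dot{X}(t) + B\,X(t) = f(t) + \Lambda\,\dot{W}(t), \qquad t \in [0,T]
```

Here \(\dot{W}\) denotes white noise, the distributional derivative of Brownian motion. When \(A\) is singular, some components of \(X\) inherit white noise directly through the algebraic constraints, which is why the solution must be defined as a generalised process rather than an ordinary one.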
Abstract:
This paper assesses empirically the importance of size discrimination and disaggregated data for deciding where to locate a start-up concern. We compare three econometric specifications using Catalan data: a multinomial logit with 4 and 41 alternatives (provinces and comarques, respectively) in which firm size is the main covariate; a conditional logit with 4 and 41 alternatives including attributes of the sites as well as size-site interactions; and a Poisson model on the comarques and the full spatial choice set (942 municipalities) with site-specific variables. Our results suggest that if these two issues are ignored, conclusions may be misleading. We provide evidence that large and small firms behave differently and conclude that Catalan firms tend to choose between comarques rather than between municipalities. Moreover, labour-intensive firms seem more likely to locate in the city of Barcelona.
Keywords: Catalonia, industrial location, multinomial response model. JEL: C250, E30, R00, R12
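The conditional logit used above has the standard McFadden form; in generic notation (assumed here, not quoted from the paper), the probability that firm \(i\) chooses site \(j\) from the choice set of \(J\) alternatives is:

```latex
% x_{ij}: attributes of site j as faced by firm i (including
% size-site interactions); J = 4 provinces or 41 comarques.
P(y_i = j) = \frac{\exp\!\left(x_{ij}'\beta\right)}
                  {\sum_{k=1}^{J} \exp\!\left(x_{ik}'\beta\right)}
```

The multinomial logit is the special case in which the covariates (such as firm size) vary over firms rather than sites, entering with alternative-specific coefficients.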