123 results for 140304 Panel Data Analysis


Relevance:

100.00%

Publisher:

Abstract:

This article examines the effects of zero trade on the estimation of the gravity model using both simulated and real data with a panel structure, which differs from the more conventional cross-sectional structure. We begin by showing that the usual log-linear estimation method can result in highly misleading inference when some observations are zero. As an alternative, we suggest the Poisson fixed effects estimator, which eliminates the problem of zero trade, controls for heterogeneity across countries, and is shown to perform well in small samples.
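The contrast the abstract draws can be illustrated with a small simulation. The sketch below (invented data and parameters, not the authors' design) fits a fixed effects Poisson model by maximum likelihood using country dummies; note that the simulated flows contain exact zeros, which a log-linear regression would have to drop.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, beta_true = 30, 8, 1.0
alpha = rng.normal(0.0, 0.5, n)              # unobserved country effects
x = rng.normal(0.0, 1.0, (n, T))             # a gravity covariate
y = rng.poisson(np.exp(alpha[:, None] + beta_true * x)).astype(float)

share_zero = float(np.mean(y == 0.0))        # zeros a log-linear model must drop

# Dummy-variable form of the fixed effects Poisson model
D = np.kron(np.eye(n), np.ones((T, 1)))      # country dummies
X = np.hstack([D, x.reshape(-1, 1)])
yv = y.reshape(-1)

# Newton-Raphson for the Poisson log-likelihood, warm-started from OLS on log(y+1)
b = np.linalg.lstsq(X, np.log(yv + 1.0), rcond=None)[0]
for _ in range(100):
    m = np.exp(X @ b)
    step = np.linalg.solve(X.T @ (X * m[:, None]), X.T @ (yv - m))
    b = b + step
    if np.max(np.abs(step)) < 1e-8:
        break

beta_poisson = float(b[-1])
print(f"share of zero flows: {share_zero:.2f}, Poisson FE slope: {beta_poisson:.3f}")
```

Despite a nontrivial share of exact zeros, the Poisson estimator uses every observation and recovers the slope without any log transformation.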

Relevance:

100.00%

Publisher:

Abstract:

Airport baggage handling systems are a critical infrastructure component within major airports, essential both to ensure smooth luggage transfer and to prevent dangerous material from being loaded onto aircraft. This paper proposes a standard set of measures for assessing the expected performance of a baggage handling system (BHS) through discrete event simulation. These evaluation methods also apply to the study of general network systems. Results from applying them reveal the operational characteristics of the studied BHS in terms of metrics such as peak throughput, in-system time, and system recovery time.
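As a rough illustration of the discrete event simulation approach (not the paper's model; the arrival rate, screening time, and shift length below are all hypothetical), the sketch simulates a single screening station and reports two of the metrics mentioned, in-system time and peak throughput:

```python
import heapq, random

random.seed(1)
ARRIVAL_RATE = 2.0      # bags per minute (hypothetical)
SCREEN_TIME = 0.4       # minutes per bag at the screening machine
SIM_MINUTES = 480.0     # one 8-hour shift

events = []             # (time, kind, bag_id) min-heap of pending events
t, bag = 0.0, 0
while t < SIM_MINUTES:  # pre-generate Poisson arrivals
    t += random.expovariate(ARRIVAL_RATE)
    heapq.heappush(events, (t, "arrive", bag))
    bag += 1

queue, busy_until = [], 0.0
enter, leave = {}, {}
while events:
    now, kind, b = heapq.heappop(events)
    if kind == "arrive":
        enter[b] = now
        queue.append(b)
    else:
        leave[b] = now
    if queue and busy_until <= now:     # start the next bag if the machine is free
        nxt = queue.pop(0)
        busy_until = now + SCREEN_TIME
        heapq.heappush(events, (busy_until, "done", nxt))

done = sorted(leave)
times = [leave[b] - enter[b] for b in done]
per_min = {}
for b in done:                          # completions per whole minute
    per_min[int(leave[b])] = per_min.get(int(leave[b]), 0) + 1
print(f"bags: {len(done)}, mean in-system time: {sum(times)/len(times):.2f} min, "
      f"peak throughput: {max(per_min.values())} bags/min")
```

A real BHS model would have many conveyor segments and screening levels, but the same event-queue mechanics and the same metric definitions carry over.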

Relevance:

100.00%

Publisher:

Abstract:

Ordinal data are ubiquitous in user-generated feedback such as questionnaires and preference ratings. This paper investigates modelling ordinal data with Gaussian restricted Boltzmann machines (RBMs). In particular, we present the model architecture, learning, and inference procedures for both vector-variate and matrix-variate ordinal data. We show that our model is able to capture the latent opinion profiles of citizens around the world, and is competitive against state-of-the-art collaborative filtering techniques on large-scale public datasets. The model thus has the potential to extend the application of RBMs to diverse domains such as recommendation systems, product reviews, and expert assessments.
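For orientation, the sketch below trains a plain Gaussian-visible RBM with one-step contrastive divergence (CD-1) on toy real-valued data; the paper's ordinal link functions and matrix-variate extensions are omitted, and all sizes and rates are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid, lr = 6, 4, 0.05

# Toy real-valued "rating" vectors (unit-variance visibles assumed)
data = rng.normal(0, 1, (200, n_vis))
data[:, :3] += data[:, 3:] * 0.5          # induce correlation for the model to learn

W = rng.normal(0, 0.1, (n_vis, n_hid))
b_v = np.zeros(n_vis)
b_h = np.zeros(n_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for epoch in range(30):                    # CD-1 training loop
    v0 = data
    ph0 = sigmoid(v0 @ W + b_h)            # hidden activation probabilities
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    v1 = h0 @ W.T + b_v                    # Gaussian visibles: mean reconstruction
    ph1 = sigmoid(v1 @ W + b_h)
    W += lr * ((v0.T @ ph0) - (v1.T @ ph1)) / len(data)
    b_v += lr * (v0 - v1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)

recon = sigmoid(data @ W + b_h) @ W.T + b_v
recon_err = float(np.mean((data - recon) ** 2))
print(f"mean squared reconstruction error: {recon_err:.3f}")
```

Handling ordinal visibles, as the paper does, replaces the Gaussian visible layer with an ordered-category likelihood while keeping the same energy-based learning loop.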

Relevance:

100.00%

Publisher:

Abstract:

Global positioning system (GPS) technology has improved the speed, accuracy, and ease of time-motion analyses of field sport athletes. The large volume of numerical data generated by GPS technology is usually summarized by reporting the distance traveled and time spent in various locomotor categories (e.g., walking, jogging, and running). There are a variety of definitions used in the literature to represent these categories, which makes it nearly impossible to compare findings among studies.

The purpose of this work was to propose standard definitions (velocity ranges) that were determined by an objective analysis of time-motion data. In addition, we discuss the limitations of the existing definition of a sprint and present a new definition of sprinting for field sport athletes.

Twenty-five GPS data files collected from 5 different sports (men’s and women’s field hockey, men’s and women’s soccer, and Australian Rules Football) were analyzed to identify the average velocity distribution. A curve fitting process was then used to determine the optimal placement of 4 Gaussian curves representing the typical locomotor categories. 
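The curve-fitting step can be illustrated with a standard EM fit of a four-component Gaussian mixture to a synthetic velocity sample (the velocities and component parameters below are invented, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic velocity sample (m/s) mimicking walk/jog/run/sprint modes
v = np.concatenate([
    rng.normal(1.5, 0.4, 4000),   # walking
    rng.normal(3.5, 0.6, 3000),   # jogging
    rng.normal(5.5, 0.8, 2000),   # running
    rng.normal(8.0, 1.0, 500),    # sprinting
])

K = 4
mu = np.array([1.0, 3.0, 5.0, 8.5])   # crude starting guesses
sd = np.ones(K)
w = np.full(K, 1.0 / K)

for _ in range(100):                   # EM for a 1-D Gaussian mixture
    dens = w * np.exp(-0.5 * ((v[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)    # E-step responsibilities
    nk = resp.sum(axis=0)
    w = nk / len(v)                                  # M-step updates
    mu = (resp * v[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (v[:, None] - mu) ** 2).sum(axis=0) / nk)

order = np.argsort(mu)
print("component means (m/s):", np.round(mu[order], 2))
```

The crossing points of the fitted curves are natural boundaries between locomotor categories, which is essentially how data-driven velocity ranges can be placed.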


Based on the findings of these analyses, we make recommendations about sport-specific velocity ranges to be used in future time-motion studies of field sport athletes. We also suggest that a sprint be defined as any movement that reaches or exceeds the sprint threshold velocity for at least 1 second, as well as any movement whose acceleration falls within the highest 5% of accelerations found in the corresponding velocity range.
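The velocity clause of such a definition is straightforward to operationalize. The sketch below (hypothetical sampling rate and threshold; the acceleration clause is omitted) counts movements that stay at or above the threshold for at least 1 second:

```python
SAMPLE_HZ = 10                 # GPS sampling rate (hypothetical)
SPRINT_THRESHOLD = 6.7         # m/s, hypothetical sprint threshold velocity
MIN_SAMPLES = SAMPLE_HZ        # "at least 1 second" above threshold

def count_sprints(velocities):
    """Count runs of >= 1 s at or above the sprint threshold velocity."""
    sprints, run = 0, 0
    for v in velocities:
        if v >= SPRINT_THRESHOLD:
            run += 1
        else:
            if run >= MIN_SAMPLES:
                sprints += 1
            run = 0
    if run >= MIN_SAMPLES:     # trace ends mid-sprint
        sprints += 1
    return sprints

# 0.5 s above threshold (too short), then 1.5 s above threshold (one sprint)
trace = [3.0] * 20 + [7.0] * 5 + [3.0] * 20 + [7.2] * 15 + [2.0] * 10
print(count_sprints(trace))   # -> 1
```

The acceleration clause would add a second pass flagging samples whose acceleration lies in the top 5% for their velocity range.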

From a practical perspective, these analyses provide conditioning coaches with information on the high-intensity sprinting demands of field sport athletes, while also providing a novel method of capturing maximal effort, short-duration sprints.

Relevance:

100.00%

Publisher:

Abstract:

A rapid analytical approach for discriminating and quantifying polyunsaturated fatty acid (PUFA) contents, particularly eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), in a range of oils extracted from marine resources was developed using attenuated total reflection Fourier transform infrared spectroscopy and multivariate data analysis. Spectral data were collected directly, without any chemical sample preparation, and processed on the developed spectral analysis platform, making the approach fast, cost effective, and suitable for routine use in biotechnological, food research, and related industries. Unsupervised pattern recognition techniques, including principal component analysis and unsupervised hierarchical cluster analysis, discriminated the marine oils into groups according to similarities and differences in their fatty acid (FA) compositions, and these groupings corresponded well to the FA profiles obtained from traditional lipid analysis based on gas chromatography (GC). Furthermore, unsaturated fatty acids, PUFAs, EPA, and DHA were quantified by partial least squares regression, with calibration models optimized for each target FA, in both known marine oils and fully independent n-3 oil samples from an actual commercial product, providing prospective testing of the developed models in a realistic application. The predicted FA contents showed good accuracy against their reference GC values, as evidenced by (1) low root mean square errors of prediction, (2) coefficients of determination close to 1 (R² ≥ 0.96), and (3) residual predictive deviation values indicating good or better predictive power for all target FAs. © 2014 Springer Science+Business Media New York.
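As a minimal illustration of the unsupervised step, the sketch below runs PCA (via SVD) on toy "spectra" for two synthetic oil groups; the data are invented, not the study's FTIR measurements:

```python
import numpy as np

rng = np.random.default_rng(7)
# Toy "spectra": two oil groups with different band intensities plus noise
wavenumbers = 50
group_a = rng.normal(0, 0.05, (10, wavenumbers))
group_a[:, 10:15] += 1.0                 # band present only in group A
group_b = rng.normal(0, 0.05, (10, wavenumbers))
group_b[:, 30:35] += 1.0                 # band present only in group B
X = np.vstack([group_a, group_b])

Xc = X - X.mean(axis=0)                  # mean-center before PCA
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                   # project onto the first two PCs
explained = S[:2] ** 2 / (S ** 2).sum()

print("variance explained by PC1+PC2:", round(float(explained.sum()), 3))
```

Because the dominant variation is the between-group band difference, the two groups separate along the first principal component, which is the kind of clustering the abstract describes.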

Relevance:

100.00%

Publisher:

Abstract:

High Performance Computing (HPC) clouds have started to change the way research in science, in particular medicine and genomics (bioinformatics), is carried out. Researchers who have taken advantage of this technology can process larger amounts of data and speed up scientific discovery. However, most HPC clouds are provided at the Infrastructure as a Service (IaaS) level: users are presented with a set of virtual servers that must be assembled into HPC environments through time-consuming resource management and software configuration tasks, which makes them practically unusable for discipline specialists without a computing background. In response, there is a new trend to expose cloud applications as services to simplify access and execution on clouds. This paper first examines commonly used cloud-based genomic analysis services (Tuxedo Suite, Galaxy, and CloudBioLinux). We then propose two new solutions (HPCaaS and Uncinus), which aim to automate aspects of the service development and deployment process. By comparing and contrasting these five solutions, we identify the key mechanisms of service creation, execution, and access required to support genomic research on the SaaS cloud, in particular by discipline specialists. © 2014 IEEE.

Relevance:

100.00%

Publisher:

Abstract:

One of the most cited studies in the field of nonstationary panel data analysis is that of LLC (Levin et al., J. Econom. 98:1-24, 2002), in which the authors propose a test for a common unit root in the panel. Using both theoretical arguments and simulation evidence, we show that this test can be misleading unless it is based on the same bandwidth selection rule used by LLC. © Springer-Verlag 2008.
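The bandwidth sensitivity at issue can be seen in a generic Bartlett-kernel (Newey-West-type) long-run variance estimator, a building block of tests of this kind. The sketch below shows the general phenomenon on a persistent AR(1) series, not LLC's specific selection rule:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 200
u = np.zeros(T)
for t in range(1, T):                  # persistent AR(1): LRV is bandwidth-sensitive
    u[t] = 0.8 * u[t - 1] + rng.normal()

def lrv_bartlett(x, K):
    """Bartlett-kernel long-run variance estimate with truncation lag K."""
    x = x - x.mean()
    s = np.mean(x * x)
    for j in range(1, K + 1):
        gamma = np.mean(x[j:] * x[:-j])            # sample autocovariance at lag j
        s += 2.0 * (1.0 - j / (K + 1)) * gamma     # Bartlett weight
    return float(s)

for K in (1, 4, 12):
    print(f"bandwidth {K:2d}: long-run variance estimate {lrv_bartlett(u, K):.2f}")
```

For persistent data the estimate grows substantially with the truncation lag, so any statistic standardized by it inherits the bandwidth choice, which is the mechanism behind the paper's warning.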

Relevance:

100.00%

Publisher:

Abstract:

A common explanation for the inability of the monetary model to beat the random walk in forecasting future exchange rates is that conventional time series tests may have low power, and that panel data should generate more powerful tests. This paper provides an extensive evaluation of this power argument for the use of panel data in the forecasting context. In particular, simulations show that although pooling the individual prediction tests can lead to substantial power gains, pooling only the parameters of the forecasting equation, as suggested in the previous literature, does not seem to generate more powerful tests. The simulation results are illustrated through an empirical application. Copyright © 2007 John Wiley & Sons, Ltd.

Relevance:

100.00%

Publisher:

Abstract:

This paper proposes new error-correction-based cointegration tests for panel data. The limiting distributions of the tests are derived and critical values are provided. Our simulation results suggest that the tests have good small-sample properties, with small size distortions and high power relative to other popular residual-based panel cointegration tests. In our empirical application, we present evidence suggesting that international healthcare expenditures and GDP are cointegrated once the possibility of an invalid common factor restriction has been accounted for. © 2007 Blackwell Publishing Ltd.
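As a single-series illustration of the error-correction idea behind such tests (not the paper's panel statistics), the sketch below estimates an error correction regression on simulated cointegrated data and forms the t-ratio on the error-correction coefficient; a significantly negative value indicates adjustment toward the long-run relation, i.e., cointegration:

```python
import numpy as np

rng = np.random.default_rng(5)
T = 300
x = np.cumsum(rng.normal(size=T))          # I(1) regressor
y = 1.5 * x + rng.normal(size=T)           # cointegrated with slope 1.5

dy = np.diff(y)
dx = np.diff(x)
# ECM regression: dy_t on a constant, y_{t-1}, x_{t-1}, and dx_t
Z = np.column_stack([np.ones(T - 1), y[:-1], x[:-1], dx])
coef, *_ = np.linalg.lstsq(Z, dy, rcond=None)
resid = dy - Z @ coef
sigma2 = resid @ resid / (len(dy) - Z.shape[1])
se = np.sqrt(sigma2 * np.linalg.inv(Z.T @ Z)[1, 1])
t_alpha = float(coef[1] / se)              # t-ratio on the error-correction term

print(f"error-correction coefficient: {coef[1]:.2f}, t-ratio: {t_alpha:.1f}")
```

A panel version pools such statistics across units, and the appropriate critical values come from nonstandard limiting distributions like those derived in the paper.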