961 results for Linear Models in Temporal Series
Abstract:
We consider the issue of assessing influence of observations in the class of Birnbaum-Saunders nonlinear regression models, which is useful in lifetime data analysis. Our results generalize those in Galea et al. [8] which are confined to Birnbaum-Saunders linear regression models. Some influence methods, such as the local influence, total local influence of an individual and generalized leverage are discussed. Additionally, the normal curvatures for studying local influence are derived under some perturbation schemes. We also give an application to a real fatigue data set.
Abstract:
In this paper we obtain asymptotic expansions up to order n^{-1/2} for the nonnull distribution functions of the likelihood ratio, Wald, score and gradient test statistics in exponential family nonlinear models (Cordeiro and Paula, 1989), under a sequence of Pitman alternatives. The asymptotic distributions of all four statistics are obtained for testing a subset of regression parameters and for testing the dispersion parameter, thus generalising the results given in Cordeiro et al. (1994) and Ferrari et al. (1997). We also present Monte Carlo simulations in order to compare the finite-sample performance of these tests. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
OBJECTIVES: To develop a method for objective assessment of fine motor timing variability in Parkinson’s disease (PD) patients, using digital spiral data gathered by a touch screen device. BACKGROUND: A retrospective analysis was conducted on data from 105 subjects: 65 patients with advanced PD (group A), 15 intermediate patients experiencing motor fluctuations (group I), 15 early-stage patients (group S), and 10 healthy elderly subjects (HE). The subjects were asked to perform repeated upper limb motor tasks by tracing a pre-drawn Archimedes spiral shown on the screen of the device. The spiral tracing test was performed with an ergonomic pen stylus, using the dominant hand. The test was repeated three times per test occasion and the subjects were instructed to complete it within 10 seconds. Digital spiral data, including stylus position (x-y coordinates) and timestamps (milliseconds), were collected and used in subsequent analysis. The total number of observations with the test battery was as follows: Swedish group (n=10079), Italian I group (n=822), Italian S group (n=811), and HE (n=299). METHODS: The raw spiral data were processed with three data processing methods. To quantify motor timing variability during spiral drawing tasks, the Approximate Entropy (APEN) method was applied to the digitized spiral data. APEN is designed to capture the amount of irregularity or complexity in a time series. APEN requires the determination of two parameters, namely the window size and the similarity measure. In our work, after experimentation, the window size was set to 4 and the similarity measure to 0.2 (20% of the standard deviation of the time series). The final score obtained by APEN was normalized by total drawing completion time and used in subsequent analysis. The score generated by this method is henceforth denoted APEN. In addition, two more methods were applied to the digital spiral data and their scores were used in subsequent analysis.
The first method was based on the Digital Wavelet Transform and Principal Component Analysis and generated a score representing spiral drawing impairment, henceforth denoted WAV. The second method was based on the standard deviation of frequency-filtered drawing velocity, henceforth denoted SDDV. Linear mixed-effects (LME) models were used to evaluate mean differences of the spiral scores of the three methods across the four subject groups. Test-retest reliability of the three scores was assessed by taking the mean of the three possible correlations (Spearman’s rank coefficients) between the three test trials. Internal consistency of the methods was assessed by calculating correlations between their scores. RESULTS: When comparing mean spiral scores between the four subject groups, the APEN scores differed between HE subjects and the three patient groups (P=0.626 for the S group with a 9.9% mean value difference, P=0.089 for the I group with 30.2%, and P=0.0019 for the A group with 44.1%). However, there were no significant differences in the mean scores of the other two methods, except for WAV between the HE and A groups (P<0.001). WAV and SDDV were highly and significantly correlated with each other, with a coefficient of 0.69. However, APEN was not correlated with either WAV or SDDV, with coefficients of 0.11 and 0.12, respectively. Test-retest reliability coefficients of the three scores were as follows: APEN (0.9), WAV (0.83), and SDDV (0.55). CONCLUSIONS: The results show that the digital spiral analysis-based objective APEN measure is able to significantly differentiate healthy subjects from patients at an advanced stage. In contrast to the other two methods (WAV and SDDV), which are designed to quantify dyskinesias (over-medication), this method can be useful for characterizing Off symptoms in PD.
APEN was not correlated with either of the other two methods, indicating that it measures a different construct of upper limb motor function in PD patients than WAV and SDDV. APEN also had better test-retest reliability, indicating that it is more stable and consistent over time than WAV and SDDV.
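The APEN computation described above (window size m = 4, similarity threshold r = 0.2 of the series' standard deviation) can be sketched as follows. This is an illustrative NumPy implementation of standard Approximate Entropy, not the authors' code; normalization by drawing completion time is left to the caller.

```python
import numpy as np

def approximate_entropy(x, m=4, r_frac=0.2):
    """Approximate Entropy (ApEn) of a 1-D series.

    Defaults mirror the study's settings: window size m = 4 and
    similarity threshold r = 0.2 * std of the series.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = r_frac * np.std(x)

    def phi(m):
        # Embed the series into overlapping windows of length m.
        windows = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of windows.
        dist = np.max(np.abs(windows[:, None, :] - windows[None, :, :]), axis=2)
        # Fraction of windows within tolerance r of each template
        # (self-matches included, so the fraction is always positive).
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    # ApEn: loss of predictability when extending templates by one point.
    return phi(m) - phi(m + 1)
```

Lower values indicate more regular (predictable) drawing dynamics; a highly irregular signal scores higher than a smooth periodic one.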
Abstract:
After more than forty years studying growth, two classes of growth models have emerged: exogenous and endogenous growth models. Since both try to mimic the same set of long-run stylized facts, they are observationally equivalent in some respects. Our goals in this paper are twofold. First, we discuss the time-series properties of growth models in a way that is useful for assessing their fit to the data. Second, we investigate whether these two models successfully conform to U.S. post-war data. We use cointegration techniques to estimate and test long-run capital elasticities, exogeneity tests to investigate the exogeneity status of TFP, and Granger-causality tests to examine the temporal precedence of TFP with respect to infrastructure expenditures. The empirical evidence is robust in confirming the existence of a unity long-run capital elasticity. The analysis of TFP reveals that it is not weakly exogenous in the exogenous growth model. Granger-causality test results show unequivocally that, for both models, there is no evidence that TFP precedes infrastructure expenditures without being preceded by it. On the contrary, we find some evidence that infrastructure investment precedes TFP. Our estimated impact of infrastructure on TFP lies roughly in the interval (0.19, 0.27).
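A bivariate Granger-causality test of the kind used above reduces to an F-test: regress the target series on its own lags (restricted model), then add lags of the candidate cause (unrestricted model) and test whether the fit improves. A minimal NumPy sketch, with illustrative names rather than the paper's code:

```python
import numpy as np

def granger_f(y, x, p=2):
    """F statistic for 'x Granger-causes y' with p lags.

    Compare against the F(p, T - p - 2p - 1) distribution to get a
    p-value; here we just return the statistic.
    """
    y = np.asarray(y, float)
    x = np.asarray(x, float)
    T = len(y)
    target = y[p:]
    # Restricted design: constant plus p lags of y itself.
    Xr = np.column_stack([np.ones(T - p)] +
                         [y[p - k:T - k] for k in range(1, p + 1)])
    # Unrestricted design: additionally p lags of x.
    Xu = np.column_stack([Xr] + [x[p - k:T - k] for k in range(1, p + 1)])

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        resid = target - X @ beta
        return resid @ resid

    rss_r, rss_u = rss(Xr), rss(Xu)
    df = (T - p) - Xu.shape[1]  # residual degrees of freedom
    return ((rss_r - rss_u) / p) / (rss_u / df)
```

A large statistic for granger_f(tfp, infrastructure) but not for the reverse direction would match the precedence pattern reported above.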
Abstract:
In this study, we verify the existence of predictability in the Brazilian equity market. Unlike other studies of this kind, which evaluate the original series of each stock, we evaluate synthetic series created on the basis of linear models of stocks. Following Burgess (1999), we use the “stepwise regression” model to form models for each stock. We then use the variance ratio profile together with a Monte Carlo simulation to select models with potential predictability. Unlike Burgess (1999), we carry out White’s Reality Check (2000) in order to verify the existence of positive out-of-sample returns. We use the strategies proposed by Sullivan, Timmermann & White (1999) and Hsu & Kuan (2005), amounting to 26,410 simulated strategies. Finally, using the bootstrap methodology with 1,000 simulations, we find strong evidence of predictability in the models, even after transaction costs.
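The variance ratio profile mentioned above compares the variance of q-period returns with q times the one-period variance: a ratio near 1 at all horizons is consistent with a random walk, while systematic departures suggest predictability. A minimal sketch (illustrative names, not the study's code):

```python
import numpy as np

def variance_ratio(returns, q):
    """VR(q): variance of overlapping q-period sums over q times the
    one-period variance. VR ~ 1 under a random walk; VR > 1 suggests
    positive autocorrelation (momentum), VR < 1 mean reversion."""
    r = np.asarray(returns, float)
    r = r - r.mean()
    q_sums = np.convolve(r, np.ones(q), mode="valid")  # overlapping q-sums
    return q_sums.var() / (q * r.var())

def variance_ratio_profile(returns, max_q=10):
    """The profile over horizons q = 2..max_q, as used for screening
    candidate models for potential predictability."""
    return [variance_ratio(returns, q) for q in range(2, max_q + 1)]
```

In the screening step described above, profiles of the synthetic series would be compared against a Monte Carlo band generated under the random-walk null.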
Abstract:
The past decade has witnessed a series of (well accepted and defined) financial crisis periods in the world economy. Most of these events are country specific and eventually spread across neighboring countries, with the concept of vicinity extrapolating geographic maps and entering contagion maps. Unfortunately, what contagion represents and how to measure it are still unanswered questions. In this article we measure the transmission of shocks by cross-market correlation coefficients, following Forbes and Rigobon’s (2000) notion of shift-contagion. Our main contribution relies upon the use of traditional factor model techniques combined with stochastic volatility models to study the dependence among Latin American stock price indexes and the North American index. More specifically, we concentrate on situations where the factor variances are modeled by a multivariate stochastic volatility structure. From a theoretical perspective, we improve currently available methodology by allowing the factor loadings in the factor model structure to be time-varying and to capture changes in the series’ weights over time. By doing this, we believe that changes and interventions experienced by those five countries are well accommodated by our models, which learn and adapt reasonably fast to those economic and idiosyncratic shocks. We empirically show that the time-varying covariance structure can be modeled by one or two common factors and that some sort of contagion is present in most of the series’ covariances during periods of economic instability, or crisis. Open issues on real-time implementation and natural model comparisons are thoroughly discussed.
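The shift-contagion notion cited above rests on the fact that measured cross-market correlations rise mechanically when volatility rises. Under the standard Forbes-Rigobon heteroskedasticity correction (stated here from the published formula; the paper's exact implementation may differ), the adjusted correlation is rho* = rho / sqrt(1 + delta * (1 - rho^2)), where delta is the relative increase in the source market's return variance during the crisis:

```python
import math

def adjusted_correlation(rho, delta):
    """Forbes-Rigobon-style heteroskedasticity-adjusted correlation.

    rho:   cross-market correlation measured in the high-volatility
           (crisis) period.
    delta: relative increase in the source market's return variance,
           var_crisis / var_calm - 1 (delta >= 0).

    The correction shrinks the measured correlation; evidence of
    shift-contagion requires the *adjusted* correlation to still rise
    significantly relative to tranquil periods.
    """
    return rho / math.sqrt(1.0 + delta * (1.0 - rho ** 2))
```

For example, a crisis-period correlation of 0.6 with a 150% variance increase adjusts down to roughly 0.43, illustrating how raw correlations overstate contagion.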
Abstract:
Increasing competition caused by globalization, high growth of some emerging markets and stagnation of developed economies motivate Consumer Packaged Goods (CPGs) manufacturers to turn their attention to emerging markets. These companies are expected to adapt their marketing activities to the particularities of these markets in order to succeed. In a country classified as an emerging market, regions are not alike and some contrasts can be identified. In addition, divergences in the effect of marketing variables can also be observed across different retail formats. Retail formats in emerging markets can be segregated into chain self-service and traditional full-service. Thus, understanding the effectiveness of the marketing mix beyond country-level aggregated data can be an important contribution. Inasmuch as companies aim to generate profits from emerging markets, price is an important marketing variable in the process of creating competitive advantage. Along with price, promotional variables such as in-store displays and price cuts are often viewed as temporary incentives to increase short-term sales. Managers defend the usage of promotions as the most reliable and fastest way to increase sales and hence short-term profits. However, some authors warn about the disadvantages of sales promotions, mainly in the long term. This study investigates the effect of price and in-store promotions on sales volume in different regions within an emerging market. The database used is at the SKU level for juice, segregated into the Brazilian northeast and southeast regions and corresponding to the period from January 2011 to January 2013. The methodological approach is descriptive and quantitative, involving validation tests and the application of multivariate and time series analysis methods. The Vector Autoregressive (VAR) model was used to perform the analysis.
Results suggest similar price sensitivity in the northeast and southeast regions and greater in-store promotion sensitivity in the northeast. Price reductions show negative results in the long term (persistent sales effects over six months), while in-store promotion shows positive results. In-store promotion shows no significant influence on sales in chain self-service stores, while price demonstrates no relevant impact on sales in traditional full-service stores. Hence, this study contributes to the business environment for companies wishing to manage price and sales promotions for consumer brands in regions with different features within an emerging market. As a theoretical contribution, this study fills an academic gap by providing a dedicated price and sales promotion study contrasting regions within an emerging market.
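A VAR(p) of the kind used above can be estimated equation by equation with OLS: each variable is regressed on a constant and p lags of all variables. A minimal NumPy sketch (a two-variable illustration, not the study's sales/price/promotion specification):

```python
import numpy as np

def fit_var(Y, p=1):
    """OLS estimation of a VAR(p).

    Y is a (T, k) array of k series over T periods. Returns the
    intercept vector c and lag coefficient matrices A_1..A_p such that
    y_t ~ c + A_1 y_{t-1} + ... + A_p y_{t-p}.
    """
    Y = np.asarray(Y, float)
    T, k = Y.shape
    # Stacked regressors: constant plus p lags of every series.
    X = np.column_stack([np.ones(T - p)] +
                        [Y[p - j:T - j] for j in range(1, p + 1)])
    B, *_ = np.linalg.lstsq(X, Y[p:], rcond=None)
    c = B[0]
    # Rows 1+(j-1)k .. jk of B hold the lag-j coefficients, transposed.
    A = [B[1 + (j - 1) * k: 1 + j * k].T for j in range(1, p + 1)]
    return c, A
```

Impulse responses (the persistent six-month effects reported above) would then be traced by iterating the estimated A matrices on a one-time shock.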
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Separation methods have limited application as a result of operational costs, low output, and the long time needed to separate the fluids. Nevertheless, these treatment methods are important because of the need to extract unwanted contaminants in oil production. The water content and the concentration of oil in water should be minimal (around 40 to 20 ppm) before discharge into the sea. Because of the need for primary treatment, the objective of this project is to study and implement algorithms for the identification of polynomial NARX (Nonlinear Auto-Regressive with Exogenous Input) models in closed loop, implement structural identification, and compare strategies using PI control and on-line updated NARX predictive models on a three-phase separator in series with three hydrocyclone batteries. The main goals of this project are to obtain an optimized phase separation process that regulates the system even in the presence of oil gushes; to show that it is possible to obtain optimized tunings for controllers by analyzing the loop as a whole; and to evaluate and compare the PI and predictive control strategies applied to the process. To accomplish these goals, a simulator was used to represent the three-phase separator and hydrocyclones. Algorithms were developed for system identification (NARX) using RLS (Recursive Least Squares), along with methods for model structure detection. Predictive control algorithms were also implemented with the NARX model updated on-line, together with optimization algorithms using PSO (Particle Swarm Optimization). This project ends with a comparison of results obtained from the use of PI and predictive controllers (both tuned through the particle swarm algorithm) in the simulated system.
The results indicate that the performed optimizations make the system less sensitive to external perturbations and that, when optimized, the two controllers show similar results, with predictive control being somewhat less sensitive to disturbances.
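The on-line NARX updating described above rests on recursive least squares: each new sample updates the parameter estimates without refitting from scratch. A minimal sketch with an exponential forgetting factor; the polynomial regressor vector phi (lagged outputs, inputs, and their products) is application-specific and only illustrated in the usage below:

```python
import numpy as np

class RLS:
    """Recursive Least Squares with forgetting factor lam.

    A sketch of the estimator used to keep the polynomial NARX model
    updated on-line; regressor construction is left to the caller.
    """

    def __init__(self, n_params, lam=0.99, delta=100.0):
        self.theta = np.zeros(n_params)    # parameter estimates
        self.P = delta * np.eye(n_params)  # inverse-covariance-like matrix
        self.lam = lam                     # forgetting factor (<= 1)

    def update(self, phi, y):
        """Incorporate one observation (regressor phi, measured output y)."""
        phi = np.asarray(phi, float)
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)   # gain vector
        err = y - phi @ self.theta           # a priori prediction error
        self.theta = self.theta + k * err
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return err
```

For a simple NARX-like plant y_t = a*y_{t-1} + b*u_{t-1}, feeding phi = [y_{t-1}, u_{t-1}] at each step recovers [a, b]; richer polynomial terms (e.g. y_{t-1}*u_{t-1}) just extend phi.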
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
In this work we compared the estimates of the parameters of ARCH models using a complete Bayesian method and an empirical Bayesian method, in which we adopted a non-informative prior distribution and an informative prior distribution, respectively. We also considered a reparameterization of those models in order to map the parameter space into the real space. This procedure permits choosing normal prior distributions for the transformed parameters. The posterior summaries were obtained using Markov chain Monte Carlo (MCMC) methods. The methodology was evaluated by considering the Telebras series from the Brazilian financial market. The results show that the two methods are able to fit ARCH models with different numbers of parameters. The empirical Bayesian method provided a more parsimonious model and a better fit to the data than the complete Bayesian method.
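For an ARCH(1) model, sigma_t^2 = alpha_0 + alpha_1 * eps_{t-1}^2 with alpha_0 > 0 and 0 < alpha_1 < 1, one way to map the constrained parameter space onto the real line, so that normal priors can be placed on the transformed parameters, is a log/logit transform. This is a sketch of the reparameterization idea only; the paper's exact transform may differ:

```python
import math

def to_real(alpha0, alpha1):
    """Map ARCH(1) parameters (alpha0 > 0, 0 < alpha1 < 1) to R^2.

    log handles the positivity constraint, logit the unit-interval
    (stationarity) constraint; normal priors then live on (phi0, phi1).
    """
    return math.log(alpha0), math.log(alpha1 / (1.0 - alpha1))

def from_real(phi0, phi1):
    """Inverse map back to the constrained parameter space."""
    return math.exp(phi0), 1.0 / (1.0 + math.exp(-phi1))
```

MCMC then samples (phi0, phi1) unconstrained and transforms draws back with from_real (with the appropriate Jacobian term in the posterior density).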