927 results for Time-series Analysis
Abstract:
Objective: To develop a method for objective quantification of PD motor symptoms related to Off episodes and peak-dose dyskinesias, using spiral data gathered with a touch screen telemetry device. The aim was to objectively characterize the predominant motor phenotypes (bradykinesia and dyskinesia), to help automate the process of visual interpretation of movement anomalies in spirals as rated by movement disorder specialists. Background: A retrospective analysis was conducted on recordings from 65 patients with advanced idiopathic PD from nine different clinics in Sweden, recruited from January 2006 until August 2010. In addition to the patient group, 10 healthy elderly subjects were recruited. Upper limb movement data were collected using a touch screen telemetry device in the home environments of the subjects. Measurements with the device were performed four times per day during week-long test periods. On each test occasion, the subjects were asked to trace pre-drawn Archimedean spirals, using the dominant hand. The pre-drawn spiral was shown on the screen of the device. The spiral test was repeated three times per test occasion, and subjects were instructed to complete each trace within 10 seconds. The device had a sampling rate of 10 Hz and recorded both the position and the time stamps (in milliseconds) of the pen tip. Methods: Four independent raters (FB, DH, AJ and DN) used a web interface that animated the spiral drawings, allowing them to observe different kinematic features during the drawing process and to rate task performance. Initially, a number of kinematic features were assessed, including ‘impairment’, ‘speed’, ‘irregularity’ and ‘hesitation’, followed by marking the predominant motor phenotype on a 3-category scale: tremor, bradykinesia and/or choreatic dyskinesia. There were only two test occasions on which all four raters either classified the phenotype as tremor or could not identify it; therefore, the two main motor phenotype categories were bradykinesia and dyskinesia. ‘Impairment’ was rated on a scale from 0 (no impairment) to 10 (extremely severe), whereas ‘speed’, ‘irregularity’ and ‘hesitation’ were rated on a scale from 0 (normal) to 4 (extremely severe). The proposed data-driven method consisted of the following steps. Initially, 28 spatiotemporal features were extracted from the time series signals before being presented to a Multilayer Perceptron (MLP) classifier. The features were based on different kinematic quantities of the spirals, including radius, angle, speed and velocity, with the aim of measuring the severity of involuntary symptoms and discriminating between PD-specific (bradykinesia) and/or treatment-induced symptoms (dyskinesia). Principal Component Analysis was applied to the features to reduce their dimensionality; four relevant principal components (PCs) were retained and used as inputs to the MLP classifier. Finally, the MLP classifier mapped these components to the corresponding visually assessed motor phenotype scores, automating the scoring of bradykinesia and dyskinesia in PD patients while they drew spirals on the touch screen device. For motor phenotype (bradykinesia vs. dyskinesia) classification, stratified 10-fold cross-validation was employed.
Results: There was good agreement among the four raters when rating the individual kinematic features, with intra-class correlation coefficients (ICC) of 0.88 for ‘impairment’, 0.74 for ‘speed’ and 0.70 for ‘irregularity’, and moderate agreement when rating ‘hesitation’, with an ICC of 0.49. When assessing the two main motor phenotype categories (bradykinesia or dyskinesia) in animated spirals, agreement among the four raters ranged from fair to moderate. There were good correlations between the mean ratings of the four raters on individual kinematic features and the computed scores. The MLP classifier classified the motor phenotype (bradykinesia vs. dyskinesia) with an accuracy of 85% in relation to visual classifications of the four movement disorder specialists. The test-retest reliability of the four PCs across the three spiral test trials was good, with Cronbach’s alpha coefficients of 0.80, 0.82, 0.54 and 0.49, respectively. These results indicate that the computed scores are stable and consistent over time. Significant differences were found between the two groups (patients and healthy elderly subjects) in all the PCs, except for PC3. Conclusions: The proposed method automatically assessed the severity of unwanted symptoms and could reasonably well discriminate between PD-specific and/or treatment-induced motor symptoms, in relation to visual assessments of movement disorder specialists. The objective assessments could provide a time-effect summary score that could be useful for improving decision-making during symptom evaluation of individualized treatment, when the goal is to maximize functional On time for patients while minimizing their Off episodes and troublesome dyskinesias.
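To make the pipeline described in this abstract concrete, the following minimal sketch (Python with scikit-learn) mirrors the reported steps: standardized spatiotemporal features, PCA reduced to four components, an MLP classifier, and stratified 10-fold cross-validation. The feature matrix X and labels y are random placeholders rather than the study's data, and the network size and other hyperparameters are illustrative assumptions, not the authors' settings.

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 28))      # placeholder: 28 spatiotemporal features per test occasion
y = rng.integers(0, 2, size=200)    # placeholder labels: 0 = bradykinesia, 1 = dyskinesia

pipe = Pipeline([
    ("scale", StandardScaler()),    # put features on a common scale before PCA
    ("pca", PCA(n_components=4)),   # retain four principal components, as in the abstract
    ("mlp", MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)),
])

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
acc = cross_val_score(pipe, X, y, cv=cv, scoring="accuracy")
print(f"stratified 10-fold accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")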
Abstract:
A challenge for the clinical management of advanced Parkinson’s disease (PD) patients is the emergence of fluctuations in motor performance, which represent a significant source of disability during the patients’ activities of daily living. There is a lack of objective measurement of treatment effects for in-clinic and at-home use that can provide an overview of the treatment response. The objective of this paper was to develop a method for objective quantification of advanced PD motor symptoms related to Off episodes and peak-dose dyskinesia, using spiral data gathered by a touch screen telemetry device. More specifically, the aim was to objectively characterize motor symptoms (bradykinesia and dyskinesia), to help automate the process of visual interpretation of movement anomalies in spirals as rated by movement disorder specialists. Digitized upper limb movement data of 65 advanced PD patients and 10 healthy elderly (HE) subjects were recorded as they performed spiral drawing tasks on a touch screen device in their home environments. Several spatiotemporal features were extracted from the time series and used as inputs to machine learning methods. The methods were validated against ratings on animated spirals scored by four movement disorder specialists who visually assessed a set of kinematic features and the motor symptom. The ability of the method to discriminate between PD patients and HE subjects and the test-retest reliability of the computed scores were also evaluated. Computed scores correlated well with mean visual ratings of individual kinematic features. The best performing classifier (Multilayer Perceptron) classified the motor symptom (bradykinesia or dyskinesia) with an accuracy of 84% and an area under the receiver operating characteristic curve of 0.86 in relation to visual classifications of the raters. In addition, the method provided high discriminating power when distinguishing between PD patients and HE subjects and had good test-retest reliability. This study demonstrated the potential of using digital spiral analysis for objective quantification of PD-specific and/or treatment-induced motor symptoms.
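As an illustration of the kind of spatiotemporal features mentioned in this and the previous abstract (radius, angle and speed computed from the 10 Hz pen-tip samples), here is a small numpy sketch. The arrays x, y and t are hypothetical pen-tip coordinates and millisecond time stamps, and the summary statistics are only examples of plausible features, not the papers' exact feature set.

import numpy as np

def spiral_features(x, y, t_ms):
    """Toy kinematic summaries from pen-tip samples (x, y in screen units, t_ms in milliseconds)."""
    cx, cy = x.mean(), y.mean()                      # crude estimate of the spiral centre
    radius = np.hypot(x - cx, y - cy)
    angle = np.unwrap(np.arctan2(y - cy, x - cx))    # cumulative drawing angle
    dt = np.diff(t_ms) / 1000.0                      # seconds between samples (~0.1 s at 10 Hz)
    speed = np.hypot(np.diff(x), np.diff(y)) / dt    # pen-tip speed
    return {
        "mean_speed": speed.mean(),
        "speed_cv": speed.std() / speed.mean(),      # speed variability (a hesitation proxy)
        "radial_drift": np.abs(np.diff(radius)).mean(),
        "angular_rate": np.abs(np.diff(angle)).mean() / dt.mean(),
    }

# hypothetical usage with fabricated samples: an ideal Archimedean spiral drawn in 10 s at 10 Hz
t = np.arange(0, 10_000, 100)
theta = np.linspace(0, 6 * np.pi, t.size)
x, y = theta * np.cos(theta), theta * np.sin(theta)
print(spiral_features(x, y, t))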
Abstract:
Using national accounts data for the revenue-GDP and expenditure-GDP ratios from 1947 to 1992, we examine two central issues in public finance. First, was the path of public debt sustainable during this period? Second, if debt is sustainable, how has the government historically balanced the budget after shocks to either revenues or expenditures? The results show that (i) the public deficit is stationary (bounded asymptotic variance), with the budget in Brazil being balanced almost entirely through changes in taxes, regardless of the cause of the initial imbalance. Expenditures are weakly exogenous, but tax revenues are not; (ii) the behavior of a rational Brazilian consumer can be consistent with Ricardian Equivalence; (iii) seigniorage revenues are critical to restore intertemporal budget equilibrium, since, when we exclude them from total revenues, debt is not sustainable in econometric tests.
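A minimal sketch of the kind of stationarity test underlying the sustainability result: an augmented Dickey-Fuller test on the deficit, taken as the expenditure-GDP ratio minus the revenue-GDP ratio. The annual series below is a simulated placeholder; the paper's tests and data are more involved.

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
n = 46                                               # 1947-1992, annual
revenue = 0.20 + 0.01 * rng.standard_normal(n)       # placeholder revenue-GDP ratio
expenditure = 0.21 + 0.01 * rng.standard_normal(n)   # placeholder expenditure-GDP ratio
deficit = expenditure - revenue

stat, pvalue, _, _, crit, _ = adfuller(deficit, autolag="AIC")
print(f"ADF statistic {stat:.2f}, p-value {pvalue:.3f}, 5% critical value {crit['5%']:.2f}")
# rejecting the unit-root null is consistent with a stationary (sustainable) deficit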
Abstract:
The aim of this paper is to provide evidence on output convergence among the Mercosur countries and associates, using multivariate time-series tests. The methodology is based on a combination of tests and estimation procedures, both univariate and multivariate, applied to the differences in per capita real income. We use the definitions of time-series convergence proposed by Bernard & Durlauf and apply the unit root tests proposed by Abuaf & Jorion and Taylor & Sarno. In the same multivariate context, the Flôres, Preumont & Szafarz and Breuer, McNown & Wallace tests, which allow for correlations across the series without imposing a common speed of mean reversion, identify the countries that converge. Concerning the empirical results, there is evidence of long-run convergence or, at least, catching up, for the smaller countries (Bolivia, Paraguay, Peru and Uruguay) towards Brazil and, to some extent, Argentina. In contrast, the evidence on convergence for the larger countries is weaker, as they have followed different (or rather opposing) macroeconomic policy strategies. Thus the future of the whole area will critically depend on the ability of Brazil, Argentina and Chile to find some scope for more cooperative policy actions.
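The univariate building block of such convergence tests can be sketched as below: a unit-root test on the log per capita income differential of each country relative to a benchmark (Brazil here), where rejection of a unit root is evidence of convergence in the Bernard & Durlauf sense. The data are random placeholders, and the multivariate refinements (Abuaf & Jorion, Taylor & Sarno, Breuer, McNown & Wallace) are not reproduced.

import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
years = range(1960, 2001)
countries = ["Argentina", "Bolivia", "Paraguay", "Peru", "Uruguay"]
# placeholder log per capita income differentials vs. Brazil (random walks here)
gaps = pd.DataFrame({c: rng.standard_normal(len(years)).cumsum() * 0.02 for c in countries},
                    index=years)

for country, series in gaps.items():
    stat, pvalue, *_ = adfuller(series, autolag="AIC")
    verdict = "convergence evidence" if pvalue < 0.05 else "no rejection of a unit root"
    print(f"{country}: ADF p-value {pvalue:.3f} -> {verdict}")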
Abstract:
Initial endogenous growth models emphasized the importance of external effects and increasing returns in explaining growth. Empirically, this hypothesis can be confirmed if the coefficient of physical capital per hour is unity in the aggregate production function. Previous estimates using time series data rejected this hypothesis, although cross-country estimates did not. The problem lies with the techniques employed, which are unable to capture low-frequency movements of high-frequency data. Using cointegration, new time series evidence confirms the theory and conforms to the cross-country evidence. The implied Solow residual, which takes into account external effects to aggregate capital, has its behavior analyzed. The hypothesis that it is explained by government expenditures on infrastructure is confirmed. This suggests a supply-side role for government affecting productivity.
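A minimal sketch of the cointegration logic (Engle-Granger style) with statsmodels: test whether log output per hour and log capital per hour are cointegrated, and probe the unit-coefficient restriction by testing the stationarity of their difference. The two series below are simulated placeholders, not the Brazilian data used in the paper.

import numpy as np
from statsmodels.tsa.stattools import coint, adfuller

rng = np.random.default_rng(2)
n = 160                                                            # e.g., quarterly observations
log_k_per_hour = rng.standard_normal(n).cumsum() * 0.01            # placeholder I(1) series
log_y_per_hour = log_k_per_hour + 0.005 * rng.standard_normal(n)   # unit coefficient plus noise

t_stat, pvalue, _ = coint(log_y_per_hour, log_k_per_hour)
print(f"Engle-Granger cointegration p-value: {pvalue:.3f}")

# unit-coefficient restriction: the difference itself should be stationary
_, p_diff, *_ = adfuller(log_y_per_hour - log_k_per_hour)
print(f"ADF p-value for (log y/h - log k/h): {p_diff:.3f}")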
Abstract:
While it is recognized that output fluctuations are highly persistent over a certain range, less persistent results are also found at very long horizons (Cochrane, 1988), indicating the existence of local or temporary persistency. In this paper, we study time series with local persistency. A test for stationarity against a locally persistent alternative is proposed. Asymptotic distributions of the test statistic are provided under both the null and the alternative hypothesis of local persistency. A Monte Carlo experiment is conducted to study the power and size of the test. An empirical application reveals that many US real economic variables may exhibit local persistency.
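The paper's test statistic is not given in the abstract, so the sketch below only illustrates the structure of such a Monte Carlo size-and-power exercise, using the KPSS statistic as a stand-in stationarity test: rejection frequency under a stationary null (size) and under a highly persistent alternative (power). The locally persistent process studied in the paper is not reproduced here.

import warnings
import numpy as np
from statsmodels.tsa.stattools import kpss

warnings.simplefilter("ignore")                      # KPSS warns when p-values leave its table
rng = np.random.default_rng(3)

def rejection_rate(simulate, reps=500, n=200):
    rejections = 0
    for _ in range(reps):
        stat, _, _, crit = kpss(simulate(n), regression="c", nlags="auto")
        rejections += stat > crit["5%"]              # reject stationarity at the 5% level
    return rejections / reps

stationary = lambda n: rng.standard_normal(n)        # null: white noise
persistent = lambda n: rng.standard_normal(n).cumsum()  # stand-in persistent alternative
print("size  :", rejection_rate(stationary))
print("power :", rejection_rate(persistent))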
Abstract:
Chambers (1998) explores the interaction between long memory and aggregation. For continuous-time processes, he takes the aliasing effect into account when studying temporal aggregation. For discrete-time processes, however, he appears not to do so. This note gives the spectral density function of temporally aggregated long memory discrete-time processes in light of the aliasing effect. The results differ from those in Chambers (1998) and are supported by a small simulation exercise. As a result, the degree of long memory may not be invariant to temporal aggregation, specifically if d is negative and the aggregation is of the stock type.
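For reference, a standard aliasing (frequency-folding) identity shows why aggregation can alter spectral, and hence memory, properties. For stock-type aggregation that keeps every m-th observation of a discrete-time process with spectral density f, the aggregated process satisfies, for \omega \in [-\pi, \pi],

f_m(\omega) = \frac{1}{m} \sum_{j=0}^{m-1} f\!\left(\frac{\omega + 2\pi j}{m}\right),

so the behaviour of f_m near frequency zero mixes in contributions folded down from higher frequencies of the original process. This is the textbook folding relation, stated here only as background; it is not necessarily the exact expression derived in the note.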
Abstract:
Using national accounts data for the revenue-GDP and expenditure-GDP ratios from 1947 to 1992, we examine three central issues in public finance. First, was the path of public debt sustainable during this period? Second, if debt is sustainable, how has the government historically balanced the budget after shocks to either revenues or expenditures? Third, are expenditures exogenous? The results show that (i) public deficit is stationary (bounded asymptotic variance), with the budget in Brazil being balanced almost entirely through changes in taxes, regardless of the cause of the initial imbalance. Expenditures are weakly exogenous, but tax revenues are not; (ii) the behavior of a rational Brazilian consumer may be consistent with Ricardian Equivalence; (iii) seigniorage revenues are critical to restore intertemporal budget equilibrium, since, when we exclude them from total revenues, debt is not sustainable in econometric tests.
Abstract:
It is well known that cointegration between the levels of two variables (e.g. prices and dividends) is a necessary condition for assessing the empirical validity of a present-value model (PVM) linking them. The work on cointegration, namely on long-run co-movements, has been so prevalent that it is often overlooked that another necessary condition for the PVM to hold is that the forecast error entailed by the model be orthogonal to the past. This amounts to investigating whether short-run co-movements stemming from common cyclical feature restrictions are also present in such a system. In this paper we test for the presence of such co-movements in long- and short-term interest rates and in prices and dividends for the U.S. economy. We focus on the potential improvement in forecasting accuracy when imposing these two types of restrictions coming from economic theory.
Abstract:
This paper makes two original contributions. First, we show that the present value model (PVM hereafter), which has a wide application in macroeconomics and finance, entails common cyclical feature restrictions in the dynamics of the vector error-correction representation (Vahid and Engle, 1993); something that has already been investigated in that VECM context by Johansen and Swensen (1999, 2011) but has not been discussed before with this new emphasis. We also provide the present value reduced rank constraints to be tested within the log-linear model. Our second contribution relates to forecasting time series that are subject to those long- and short-run reduced rank restrictions. The reason why appropriate common cyclical feature restrictions might improve forecasting is that they provide natural exclusion restrictions, preventing the estimation of useless parameters, which would otherwise contribute to an increase in forecast variance with no expected reduction in bias. We applied the techniques discussed in this paper to data known to be subject to present value restrictions, i.e. the online series maintained and updated by Shiller. We focus on three different data sets. The first includes the levels of interest rates with long and short maturities, the second includes the levels of real prices and dividends for the S&P composite index, and the third includes the logarithmic transformation of prices and dividends. Our exhaustive investigation of several different multivariate models reveals that better forecasts can be achieved when restrictions are applied to them. Moreover, imposing short-run restrictions produces forecast winners 70% of the time for target variables of PVMs and 63.33% of the time when all variables in the system are considered.
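A minimal sketch (Python, statsmodels) of the kind of forecast comparison described above: fit an unrestricted VAR and a rank-restricted VECM on a pair of series and compare out-of-sample root mean squared errors. The two series are simulated to share a common stochastic trend and stand in for, say, long- and short-maturity interest rates; the common cyclical feature (short-run) restrictions discussed in the paper are not imposed in this toy version.

import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(4)
n, h = 300, 12
trend = rng.standard_normal(n).cumsum()                    # common stochastic trend
data = pd.DataFrame({
    "long_rate": trend + 0.3 * rng.standard_normal(n),
    "short_rate": trend + 0.3 * rng.standard_normal(n),
})
train, test = data.iloc[:-h], data.iloc[-h:]

var_res = VAR(train).fit(maxlags=2)                        # unrestricted VAR in levels
var_fc = var_res.forecast(train.values[-var_res.k_ar:], steps=h)

vecm_res = VECM(train, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
vecm_fc = vecm_res.predict(steps=h)                        # long-run (rank) restriction imposed

rmse = lambda fc: np.sqrt(((fc - test.values) ** 2).mean())
print(f"unrestricted VAR RMSE : {rmse(var_fc):.3f}")
print(f"rank-1 VECM RMSE      : {rmse(vecm_fc):.3f}")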
Abstract:
Using a sequence of nested multivariate models that are VAR-based, we discuss different layers of restrictions imposed by present-value models (PVM hereafter) on the VAR in levels for series that are subject to present-value restrictions. Our focus is novel - we are interested in the short-run restrictions entailed by PVMs (Vahid and Engle, 1993, 1997) and their implications for forecasting. Using a well-known database, kept by Robert Shiller, we implement a forecasting competition that imposes different layers of PVM restrictions. Our exhaustive investigation of several different multivariate models reveals that better forecasts can be achieved when restrictions are applied to the unrestricted VAR. Moreover, imposing short-run restrictions produces forecast winners 70% of the time for the target variables of PVMs and 63.33% of the time when all variables in the system are considered.
Abstract:
Researchers often rely on the t-statistic to make inferences about parameters in statistical models. It is common practice to obtain critical values by simulation techniques. This paper proposes a novel numerical method to obtain an approximately similar test. This test rejects the null hypothesis when the test statistic is larger than a critical value function (CVF) of the data. We illustrate this procedure when regressors are highly persistent, a case in which commonly used simulation methods encounter difficulties controlling size uniformly. Our approach works satisfactorily, controls size, and yields a test which outperforms the two other known similar tests.
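To illustrate the baseline practice the paper improves on — obtaining critical values of a t-statistic by simulation when the regressor is highly persistent — here is a small numpy sketch. The data-generating process (a near-unit-root AR(1) regressor whose innovations are correlated with the regression error, with a true slope of zero) is a common textbook setup and only an assumption here; the paper's critical value function approach itself is not implemented.

import numpy as np

rng = np.random.default_rng(5)

def t_stat_null(n=200, rho=0.99, corr=-0.9):
    """One draw of the OLS t-statistic for the slope when the true slope is zero."""
    e = rng.standard_normal((2, n))
    u = e[0]                                           # regression error
    v = corr * e[0] + np.sqrt(1 - corr**2) * e[1]      # innovation of the persistent regressor
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + v[t]
    y = u                                              # null hypothesis: slope on lagged x is zero
    xl, yl = x[:-1], y[1:]                             # regress y_t on x_{t-1}
    xd = xl - xl.mean()
    b = (xd @ (yl - yl.mean())) / (xd @ xd)
    resid = (yl - yl.mean()) - b * xd
    se = np.sqrt(resid @ resid / (len(yl) - 2) / (xd @ xd))
    return b / se

draws = np.array([t_stat_null() for _ in range(2000)])
print("simulated 2.5% / 97.5% critical values:", np.quantile(draws, [0.025, 0.975]).round(2))
print("compare with the +/-1.96 normal benchmark")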