898 results for Residual-Based Panel Cointegration Test


Relevance: 100.00%

Abstract:

Background: A test battery consisting of self-assessments and motor tests (tapping and spiral drawing) was developed for a handheld computer with a touch screen in a telemedicine setting. Objectives: To develop and evaluate a web-based system that delivers decision-support information to the treating clinical staff for assessing PD symptoms in their patients, based on the test battery data. Methods: The test battery is currently being used in a clinical trial (DAPHNE, EudraCT No. 2005-002654-21) by sixty-five patients with advanced Parkinson's disease (PD) on 9991 test occasions (four tests per day over, in all, 362 week-long test periods) at nine clinics around Sweden. Test results are sent continuously from the hand unit over a mobile network to a central computer and processed with statistical methods. They are summarized into scores for different dimensions of the symptom state and an 'overall test score' reflecting the overall condition of the patient during a test period. The information in the web application is organized and presented graphically in a way that emphasizes a general overview of patient performance per test period. The focus is on the overall test score, the symptom dimensions and daily summaries. In a recent preliminary user evaluation, the web application was demonstrated to the fifteen study nurses who had used the test battery in the clinical trial. At least one patient per clinic was shown. Results: In general, the responses from the nurses were positive. They stated that the test results shown in the system were consistent with their own clinical observations. They could follow complications, changes and trends within their patients. Discussion: In conclusion, the system is able to summarize the various time series of motor test results and self-assessments during test periods and present them in a useful manner. Its main contribution is a novel and reliable way to capture and easily access symptom information from patients' home environment.
The convenient access to current symptom profile as well as symptom history provides a basis for individualized evaluation and adjustment of treatments.

Relevance: 100.00%

Abstract:

Structured products are combinations of assets that bundle a fixed-income component with one or more embedded derivatives. In Brazil, which still lacks the specific regulation found in the United States and Europe, these products are sold mainly through structured investment funds (Fundos de Investimentos Estruturados). The objective of this study is to assess whether structured investment funds are overpriced at issuance. To that end, the difference between the issue price and a theoretical price was computed. The theoretical price was obtained by synthesizing a portfolio composed of a fixed-income component and the embedded derivatives, valuing both components with the same methodology adopted in national and international publications. Forty closed-end investment funds issued between 2006 and 2011 were analyzed, and the evidence points to a price difference, a conclusion similar to that of other studies on the subject. This price difference can be explained by product-development costs, by the hedging costs of the operations, and by the fact that small investors cannot access this market directly. Additionally, the existence of a long-run relationship between volatility and the observed price difference was examined. A cointegration test indicated a long-run relationship between the variables; variance decomposition showed that variations in the margin are explained by variations in volatility; and, finally, a Granger causality test indicated that variations in the margin precede variations in the estimated volatility. These results are expected to increase market transparency by illustrating the sophistication of the structures, and to contribute to the debate on the new regulation of structured products that the Central Bank of Brazil is about to define.

Relevance: 100.00%

Abstract:

This paper introduces a residual-based test of the null hypothesis of co-movement between two processes with local persistence, which can be applied even in the presence of an endogenous regressor. It therefore fills an existing lacuna in econometrics: long-run relationships can also be tested when the dependent and independent variables do not have a unit root but do exhibit local persistence.

Relevance: 100.00%

Abstract:

In the first chapter, I develop a panel no-cointegration test which extends the bounds test of Pesaran, Shin and Smith (2001) to the panel framework by considering the individual regressions in a Seemingly Unrelated Regression (SUR) system. This allows one to take into account unobserved common factors that contemporaneously affect all the units of the panel while providing, at the same time, unit-specific test statistics. Moreover, the approach is particularly suited when the number of individuals in the panel is small relative to the number of time-series observations. I develop the algorithm to implement the test and use Monte Carlo simulation to analyze its properties. The small-sample properties of the test are remarkable compared with its single-equation counterpart. I illustrate the use of the test through a test of Purchasing Power Parity in a panel of EU15 countries. In the second chapter of my PhD thesis, I verify the Expectation Hypothesis of the Term Structure (EHTS) in the repurchase agreements (repo) market with a new testing approach. I consider an "inexact" formulation of the EHTS, which models a time-varying component in the risk premia, and I treat the interest rates as a non-stationary cointegrated system. The effect of heteroskedasticity is controlled by means of testing procedures (bootstrap and heteroskedasticity correction) which are robust to variance and covariance shifts over time. I find that the long-run implications of the EHTS are verified. A rolling-window analysis clarifies that the EHTS is only rejected in periods of turbulence in financial markets. The third chapter introduces the Stata command "bootrank", which implements the bootstrap likelihood ratio rank test algorithm developed by Cavaliere et al. (2012). The command is illustrated through an empirical application on the term structure of interest rates in the US.

Relevance: 100.00%

Abstract:

The Receiver Operating Characteristic (ROC) curve is a prominent tool for characterizing the accuracy of continuous diagnostic tests. To account for factors that might influence test accuracy, various ROC regression methods have been proposed. However, as in any regression analysis, when the assumed models do not fit the data well, these methods may yield invalid and misleading results. To date, practical model-checking techniques suitable for validating existing ROC regression models are not available. In this paper, we develop cumulative-residual-based procedures to graphically and numerically assess the goodness of fit of some commonly used ROC regression models, and show how specific components of these models can be examined within this framework. We derive asymptotic null distributions for the residual process and discuss resampling procedures to approximate these distributions in practice. We illustrate our methods with a dataset from the Cystic Fibrosis registry.

Relevance: 100.00%

Abstract:

BACKGROUND: Continual surveillance based on patch test results has proved useful for the identification of contact allergy. OBJECTIVES: To provide a current view of the spectrum of contact allergy to important sensitizers across Europe. PATIENTS/METHODS: Clinical and patch test data of 19 793 patients patch tested in 2005/2006 in the 31 participating departments from 10 European countries (the European Surveillance System on Contact Allergies (ESSCA), www.essca-dc.org) were descriptively analysed, aggregated into four European regions. RESULTS: Nickel sulfate remains the most common allergen, with standardized prevalences ranging from 19.7% (central Europe) to 24.4% (southern Europe). While a number of allergens show limited variation across the four regions, such as Myroxylon pereirae (5.3-6.8%), cobalt chloride (6.2-8.8%) or thiuram mix (1.7-2.4%), the differences observed with other allergens may hint at underlying differences in exposures, for example: dichromate, 2.4% in the UK (west) versus 4.5-5.9% in the remaining EU regions; methylchloroisothiazolinone/methylisothiazolinone, 4.1% in the south versus 2.1-2.7% in the remaining regions. CONCLUSIONS: Notwithstanding residual methodological variation (affecting at least some 'difficult' allergens), tackled by ongoing efforts at standardization, a comparative analysis such as the one presented provides (i) a broad overview of contact allergy frequencies and (ii) interesting starting points for further, in-depth investigation.

Relevance: 100.00%

Abstract:

Considering the importance of properly detecting bubbles in financial markets for policymakers and market agents, we used two techniques, described in Diba and Grossman (1988b) and in Phillips, Shi and Yu (2015), to detect periods of exuberance in the recent history of the Brazilian stock market. First, a simple cointegration test is applied. Second, we conduct several augmented, right-tailed Dickey-Fuller tests on rolling windows of data to determine the point at which there is a structural break and the series loses its stationarity.

Relevance: 100.00%

Abstract:

This paper proposes a semiparametric smooth-coefficient (SPSC) stochastic production frontier model in which the regression coefficients are unknown smooth functions of environmental factors (Z). Technical inefficiency is specified in the form of a parametric scaling function that also depends on the Z variables. Thus, in our SPSC model the Z variables affect productivity directly via the technology parameters as well as through inefficiency. A residual-based bootstrap test of the relevance of the environmental factors in the SPSC model is suggested. An empirical application illustrates the technique.

Relevance: 100.00%

Abstract:

This thesis develops bootstrap methods for the factor models that have been widely used to generate forecasts since the pioneering diffusion-index article of Stock and Watson (2002). These models accommodate a large number of macroeconomic and financial variables as predictors, a useful feature for incorporating the diverse information available to economic agents. The thesis therefore proposes econometric tools that improve inference in factor models using latent factors extracted from a large panel of observed predictors. It is divided into three complementary chapters, the first two written in collaboration with Sílvia Gonçalves and Benoit Perron. In the first article, we study how bootstrap methods can be used for inference in models forecasting h periods ahead. To this end, it examines bootstrap inference in a factor-augmented regression setting where the errors may be autocorrelated. It generalizes the results of Gonçalves and Perron (2014) and proposes and justifies two residual-based approaches: the block wild bootstrap and the dependent wild bootstrap. Our simulations show improved coverage rates for the confidence intervals of the estimated coefficients using these approaches, compared with asymptotic theory and with the wild bootstrap, in the presence of serial correlation in the regression errors. The second chapter proposes bootstrap methods for constructing prediction intervals that relax the assumption of normally distributed innovations. We propose bootstrap prediction intervals for an observation h periods ahead and for its conditional mean. We assume these forecasts are made using a set of factors extracted from a large panel of variables. 
Because we treat these factors as latent, our forecasts depend on both the estimated factors and the estimated regression coefficients. Under regularity conditions, Bai and Ng (2006) proposed the construction of asymptotic intervals under the assumption of Gaussian innovations. The bootstrap allows us to relax this assumption and to construct prediction intervals that are valid under more general hypotheses. Moreover, even under Gaussianity, the bootstrap yields more accurate intervals when the cross-sectional dimension is relatively small, because it accounts for the bias of the ordinary least squares estimator, as shown in a recent study by Gonçalves and Perron (2014). In the third chapter, we suggest consistent selection procedures for factor-augmented regressions in finite samples. We first show that the usual cross-validation method is inconsistent, but that its generalization, leave-d-out cross-validation, selects the smallest set of estimated factors spanning the space generated by the true factors. The second criterion, whose validity we also establish, generalizes the bootstrap approximation of Shao (1996) to factor-augmented regressions. Simulations show an improved probability of parsimoniously selecting the estimated factors compared with the available selection methods. The empirical application revisits the relationship between macroeconomic and financial factors and excess returns on the US stock market. Among the factors estimated from a large panel of US macroeconomic and financial data, the factors strongly correlated with interest-rate spreads and the Fama-French factors have good predictive power for excess returns.

Relevance: 100.00%

Abstract:

We introduce a residual-based a posteriori error indicator for discontinuous Galerkin discretizations of the biharmonic equation with essential boundary conditions. We show that the indicator is both reliable and efficient with respect to the approximation error measured in terms of a natural energy norm, under minimal regularity assumptions. We validate the performance of the indicator within an adaptive mesh refinement procedure and show its asymptotic exactness for a range of test problems.

Relevance: 100.00%

Abstract:

Pulp lifters, also known as pan lifters, are an integral part of the majority of autogenous (AG), semi-autogenous (SAG) and grate-discharge ball mills. The performance of the pulp lifters, in conjunction with the grate design, determines the ultimate flow capacity of these mills. Although the function of the pulp lifters is simply to transport the slurry passed through the discharge grate into the discharge trunnion, their performance depends on their design as well as that of the grate and on operating conditions such as mill speed and charge level. However, little or no work has been reported on the performance of grate-pulp lifter assemblies, and in particular on the influence of pulp lifter design on slurry transport. Ideally, the discharge rate through a grate-pulp lifter assembly should be equal to the discharge rate through the grate alone at a given mill hold-up. However, the results obtained have shown that conventional pulp lifter designs cause considerable restrictions to flow, resulting in reduced flow capacity. In this second of a two-part series of papers, the performance of conventional pulp lifters (radial and spiral designs) is described, based on extensive test work carried out in a 1 m diameter pilot SAG mill. (C) 2003 Elsevier Science Ltd. All rights reserved.

Relevance: 100.00%

Abstract:

Abstract — Analytical methods based on evaluation models of interactive systems were proposed as an alternative to user testing in the last stages of software development because of its cost. However, the use of isolated behavioural models of the system limits the results of the analytical methods. An example of these limitations is that they are unable to identify implementation issues that will impact usability. With the introduction of model-based testing, we are able to test whether the implemented software meets the specified model. This paper presents a model-based approach for generating test cases from the static analysis of source code.

Relevance: 100.00%

Abstract:

One hundred and twenty subjects with Chagas' cardiopathy and 120 non-infected subjects were randomly selected from first-time claimants of sickness benefits at the National Institute of Social Security (INPS) in Goiás. Cases of Chagas' cardiopathy were defined based on a serological test, a history of residence in an endemic area, and clinical and/or electrocardiogram (ECG) alterations suggestive of Chagas' cardiomyopathy. Controls were defined as subjects with at least two negative serological tests. Cases and controls were compared in the analysis for age, sex, place of birth, migration history, socio-economic level, occupation, physical exertion at work, age at affiliation and years of contribution to the social security scheme, clinical course of their disease, and ECG abnormalities. Chagas' disease patients were younger than the other subjects and predominantly of rural origin. Non-infected subjects had a better socio-economic level, were performing more skilled activities and had fewer job changes than cases. No important difference was observed in relation to age at affiliation to the INPS. About 60% of cases claimed benefits within the first four years of contribution, while among controls this proportion was 38.5%. Cases were involved, proportionally more than controls, in "heavy" activities. Risks of 2.3 (95% CL 1.5 - 4.6) and 1.8 (95% CL 1.2 - 3.5) were obtained comparing, respectively, "heavy" and "moderate" physical activity against "light". A relative risk of 8.5 (95% CL 4.9 - 14.8) associated with the presence of cardiopathy was estimated comparing the initial sample of seropositive subjects and controls. A high relative risk was observed in relation to right bundle branch block (RR = 37.1, 95% CL 8.8 - 155.6) and left anterior hemiblock (RR = 4.4, 95% CL 2.1 - 9.1).

Relevance: 100.00%

Abstract:

Master's dissertation in Chemical Engineering