854 results for Residual-Based Panel Cointegration Test
Abstract:
The Receiver Operating Characteristic (ROC) curve is a prominent tool for characterizing the accuracy of a continuous diagnostic test. To account for factors that might influence test accuracy, various ROC regression methods have been proposed. However, as in any regression analysis, when the assumed models do not fit the data well, these methods may yield invalid and misleading results. To date, practical model-checking techniques suitable for validating existing ROC regression models are not available. In this paper, we develop cumulative residual-based procedures to graphically and numerically assess the goodness of fit of some commonly used ROC regression models, and show how specific components of these models can be examined within this framework. We derive asymptotic null distributions for the residual process and discuss resampling procedures to approximate these distributions in practice. We illustrate our methods with a dataset from the Cystic Fibrosis registry.
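The general idea of a cumulative-residual goodness-of-fit check with a resampling approximation of its null distribution can be illustrated on an ordinary regression. The sketch below is only a hedged stand-in for the ROC-specific construction in the abstract: it cumulates least-squares residuals along a covariate, takes the supremum of the process, and approximates the null distribution of that statistic with a wild bootstrap.

```python
# Hypothetical sketch: cumulative-residual goodness-of-fit check for a fitted
# regression, with a wild bootstrap to approximate the null distribution of
# the supremum statistic. Illustrative only; not the authors' ROC-specific test.
import numpy as np

rng = np.random.default_rng(0)
n = 300
x = rng.uniform(0, 1, n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)    # data generated under the null (linear model)

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

order = np.argsort(x)                                # cumulate residuals along the covariate
def sup_cusum(e):
    return np.max(np.abs(np.cumsum(e[order]))) / np.sqrt(n)

stat = sup_cusum(resid)

# Wild bootstrap: perturb residuals with Rademacher weights and recompute the statistic.
B = 999
boot = np.empty(B)
for b in range(B):
    w = rng.choice([-1.0, 1.0], size=n)
    y_b = X @ beta + w * resid
    beta_b = np.linalg.lstsq(X, y_b, rcond=None)[0]
    boot[b] = sup_cusum(y_b - X @ beta_b)

p_value = (1 + np.sum(boot >= stat)) / (B + 1)
print(f"sup-CUSUM statistic = {stat:.3f}, bootstrap p-value = {p_value:.3f}")
```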
Abstract:
BACKGROUND: Continual surveillance based on patch test results has proved useful for the identification of contact allergy. OBJECTIVES: To provide a current view on the spectrum of contact allergy to important sensitizers across Europe. PATIENTS/METHODS: Clinical and patch test data of 19 793 patients patch tested in 2005/2006 in the 31 participating departments from 10 European countries (the European Surveillance System on Contact Allergies (ESSCA), www.essca-dc.org) were descriptively analysed, aggregated to four European regions. RESULTS: Nickel sulfate remains the most common allergen, with standardized prevalences ranging from 19.7% (central Europe) to 24.4% (southern Europe). While a number of allergens show limited variation across the four regions, such as Myroxylon pereirae (5.3-6.8%), cobalt chloride (6.2-8.8%) or thiuram mix (1.7-2.4%), the differences observed with other allergens may hint at underlying differences in exposures, for example: dichromate, 2.4% in the UK (west) versus 4.5-5.9% in the remaining EU regions; methylchloroisothiazolinone/methylisothiazolinone, 4.1% in the South versus 2.1-2.7% in the remaining regions. CONCLUSIONS: Notwithstanding residual methodological variation (affecting at least some 'difficult' allergens), which is being tackled by ongoing efforts at standardization, a comparative analysis such as the one presented provides (i) a broad overview of contact allergy frequencies and (ii) interesting starting points for further, in-depth investigation.
Abstract:
Considering the importance of the proper detection of bubbles in financial markets for policymakers and market agents, we used two techniques, described in Diba and Grossman (1988b) and in Phillips, Shi, and Yu (2015), to detect periods of exuberance in the recent history of the Brazilian stock market. First, a simple cointegration test is applied. Second, we conduct several right-tailed augmented Dickey-Fuller tests on rolling windows of data to determine the point at which there is a structural break and the series loses its stationarity.
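A recursive right-tailed ADF scan of the kind popularized by Phillips, Shi, and Yu (2015) can be sketched in a few lines. The example below is only illustrative: the price series, minimum window length, and critical value are placeholders, and a real application would compare the recursive statistics against simulated right-tail critical values rather than a hard-coded number.

```python
# Hypothetical sketch of a recursive right-tailed ADF (SADF-style) scan:
# compute the ADF t-statistic on expanding windows and flag exuberance when
# the supremum exceeds a right-tail threshold. Illustrative placeholders only.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
price = np.cumsum(rng.normal(size=400))          # placeholder for a (log) price index

min_window = 60
adf_stats = []
for end in range(min_window, len(price) + 1):
    window = price[:end]                         # expanding (recursive) window
    stat = adfuller(window, regression="c", autolag="AIC")[0]
    adf_stats.append(stat)

sadf = max(adf_stats)                            # sup of the recursive ADF statistics
critical_value = 1.49                            # placeholder right-tail critical value
print(f"SADF = {sadf:.2f}; exuberance flagged: {sadf > critical_value}")
```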
Abstract:
This paper proposes a semiparametric smooth-coefficient (SPSC) stochastic production frontier model where regression coefficients are unknown smooth functions of environmental factors (Z). Technical inefficiency is specified in the form of a parametric scaling function which also depends on the Z variables. Thus, in our SPSC model the Z variables affect productivity directly via the technology parameters as well as through inefficiency. A residual-based bootstrap test of the relevance of the environmental factors in the SPSC model is suggested. An empirical application is also used to illustrate the technique.
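The logic of a residual-based bootstrap test of the relevance of an environmental variable can be shown on a plain regression rather than the SPSC frontier model itself. In this hedged sketch, the restricted model excludes z (the null of irrelevance), residuals from that fit are resampled to generate data under the null, and the observed F-type statistic is compared with its bootstrap distribution.

```python
# Hypothetical sketch of a residual-based bootstrap test for the relevance of an
# environmental variable z, as a simplified stand-in for the SPSC frontier setting.
import numpy as np

rng = np.random.default_rng(2)
n = 250
x = rng.normal(size=n)
z = rng.normal(size=n)
y = 1.0 + 0.8 * x + 0.3 * z + rng.normal(scale=0.7, size=n)

def rss(X, y):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return np.sum((y - X @ beta) ** 2)

X_r = np.column_stack([np.ones(n), x])          # restricted: z excluded (the null)
X_u = np.column_stack([np.ones(n), x, z])       # unrestricted: z included
stat = (rss(X_r, y) - rss(X_u, y)) / rss(X_u, y)

beta_r = np.linalg.lstsq(X_r, y, rcond=None)[0]
resid_r = y - X_r @ beta_r

B = 499
boot = np.empty(B)
for b in range(B):
    y_b = X_r @ beta_r + rng.choice(resid_r, size=n, replace=True)   # impose the null
    boot[b] = (rss(X_r, y_b) - rss(X_u, y_b)) / rss(X_u, y_b)

p_value = (1 + np.sum(boot >= stat)) / (B + 1)
print(f"relevance statistic = {stat:.3f}, bootstrap p-value = {p_value:.3f}")
```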
Abstract:
This thesis develops bootstrap methods for factor models, which have been widely used to generate forecasts since the pioneering article of Stock and Watson (2002) on diffusion indices. These models allow a large number of macroeconomic and financial variables to be included as predictors, a useful feature for incorporating the diverse information available to economic agents. My thesis therefore proposes econometric tools that improve inference in factor models using latent factors extracted from a large panel of observed predictors. It is divided into three complementary chapters, the first two of which are joint work with Sílvia Gonçalves and Benoit Perron. In the first article, we study how bootstrap methods can be used to conduct inference in models that forecast h periods ahead. To do so, it examines bootstrap inference in a factor-augmented regression setting where the errors may be autocorrelated. It generalizes the results of Gonçalves and Perron (2014) and proposes and justifies two residual-based approaches: the block wild bootstrap and the dependent wild bootstrap. Our simulations show improved coverage rates of confidence intervals for the estimated coefficients using these approaches, compared with asymptotic theory and the wild bootstrap, in the presence of serial correlation in the regression errors. The second chapter proposes bootstrap methods for constructing prediction intervals that relax the assumption of normality of the innovations. We propose bootstrap prediction intervals for an observation h periods ahead and for its conditional mean. We assume that these forecasts are made using a set of factors extracted from a large panel of variables. Because we treat these factors as latent, our forecasts depend on both the estimated factors and the estimated regression coefficients. Under regularity conditions, Bai and Ng (2006) proposed the construction of asymptotic intervals under the assumption of Gaussian innovations. The bootstrap allows us to relax this assumption and to construct prediction intervals that are valid under more general assumptions. Moreover, even under Gaussianity, the bootstrap leads to more accurate intervals when the cross-sectional dimension is relatively small, because it accounts for the bias of the ordinary least squares estimator, as shown in a recent study by Gonçalves and Perron (2014). In the third chapter, we suggest consistent selection procedures for factor-augmented regressions in finite samples. We first show that the usual cross-validation method is inconsistent, but that its generalization, leave-d-out cross-validation, selects the smallest set of estimated factors spanning the space generated by the true factors. The second criterion whose validity we establish generalizes the bootstrap approximation of Shao (1996) to factor-augmented regressions. Simulations show an improvement in the probability of parsimoniously selecting the estimated factors compared with the available selection methods.
The empirical application revisits the relationship between macroeconomic and financial factors and the excess return on the US stock market. Among the factors estimated from a large panel of US macroeconomic and financial data, the factors strongly correlated with interest rate spreads and the Fama-French factors have good predictive power for excess returns.
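The basic pipeline behind a bootstrapped factor-augmented regression can be sketched briefly: extract principal-component factors from a large panel, regress the target on the estimated factors, and resample residuals with Rademacher weights. This is a hedged, simplified stand-in for the block wild bootstrap and dependent wild bootstrap schemes studied in the thesis; all data below are simulated placeholders.

```python
# Hypothetical sketch of a wild bootstrap for a factor-augmented regression.
import numpy as np

rng = np.random.default_rng(3)
T, N, r = 200, 100, 2                              # time periods, panel width, factors
F = rng.normal(size=(T, r))                        # latent factors
panel = F @ rng.normal(size=(r, N)) + rng.normal(scale=0.5, size=(T, N))
y = 1.0 + F @ np.array([0.5, -0.3]) + rng.normal(scale=0.4, size=T)

# Principal-component estimate of the factors (largest r eigenvectors of X X'/T).
Xc = panel - panel.mean(axis=0)
eigval, eigvec = np.linalg.eigh(Xc @ Xc.T / T)
F_hat = np.sqrt(T) * eigvec[:, -r:]

W = np.column_stack([np.ones(T), F_hat])
beta = np.linalg.lstsq(W, y, rcond=None)[0]
resid = y - W @ beta

B = 999
boot = np.empty((B, W.shape[1]))
for b in range(B):
    y_b = W @ beta + rng.choice([-1.0, 1.0], size=T) * resid   # wild bootstrap draw
    boot[b] = np.linalg.lstsq(W, y_b, rcond=None)[0]

se = boot.std(axis=0)
print("estimated coefficients:", np.round(beta, 3))
print("wild-bootstrap standard errors:", np.round(se, 3))
```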
Abstract:
We introduce a residual-based a posteriori error indicator for discontinuous Galerkin discretizations of the biharmonic equation with essential boundary conditions. We show that the indicator is both reliable and efficient with respect to the approximation error measured in terms of a natural energy norm, under minimal regularity assumptions. We validate the performance of the indicator within an adaptive mesh refinement procedure and show its asymptotic exactness for a range of test problems.
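For orientation, a residual-based indicator for the biharmonic problem Δ²u = f typically combines an element residual with inter-element jump terms. The schematic below shows those standard ingredients only; the specific DG formulation in the paper adds its own consistency and penalty terms, so this is an illustrative form rather than the indicator actually proposed.

```latex
% Schematic residual-based indicator for a DG approximation u_h of \Delta^2 u = f
% (element residual plus jump terms); illustrative only.
\[
\eta_K^2 \;=\; h_K^4\,\bigl\| f - \Delta^2 u_h \bigr\|_{L^2(K)}^2
\;+\; \sum_{e \subset \partial K} \Bigl(
      h_e^{3}\,\bigl\| [\![\, \partial_n \Delta u_h \,]\!] \bigr\|_{L^2(e)}^2
    + h_e\,\bigl\| [\![\, \partial_{nn} u_h \,]\!] \bigr\|_{L^2(e)}^2 \Bigr),
\qquad
\eta \;=\; \Bigl( \sum_{K \in \mathcal{T}_h} \eta_K^2 \Bigr)^{1/2}.
\]
```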
Abstract:
We evaluate the performance of several specification tests for Markov regime-switching time-series models. We consider the Lagrange multiplier (LM) and dynamic specification tests of Hamilton (1996) and Ljung–Box tests based on both the generalized residual and a standard-normal residual constructed using the Rosenblatt transformation. The size and power of the tests are studied using Monte Carlo experiments. We find that the LM tests have the best size and power properties. The Ljung–Box tests exhibit slight size distortions, though tests based on the Rosenblatt transformation perform better than the generalized residual-based tests. The tests exhibit impressive power to detect both autocorrelation and autoregressive conditional heteroscedasticity (ARCH). The tests are illustrated with a Markov-switching generalized ARCH (GARCH) model fitted to the US dollar–British pound exchange rate, with the finding that both autocorrelation and GARCH effects are needed to adequately fit the data.
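Ljung-Box diagnostics of the kind described here are straightforward to run on a residual series: the test on the residuals targets remaining autocorrelation, the test on their squares targets remaining ARCH effects. The sketch below uses simulated placeholder residuals, not Hamilton's generalized residuals from a fitted Markov-switching model.

```python
# Hypothetical sketch of Ljung-Box diagnostics on (standardized) residuals.
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(4)
resid = rng.standard_normal(500)                   # placeholder standardized residuals

lb_levels = acorr_ljungbox(resid, lags=[10], return_df=True)        # autocorrelation
lb_squares = acorr_ljungbox(resid ** 2, lags=[10], return_df=True)  # ARCH effects

print("Ljung-Box on residuals:\n", lb_levels)
print("Ljung-Box on squared residuals:\n", lb_squares)
```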
Abstract:
Even though the driving ability of older adults may decline with age, there is evidence that some individuals attempt to compensate for these declines using strategies such as restricting their driving exposure. Such compensatory mechanisms rely on drivers’ ability to evaluate their own driving performance. This paper focuses on one key aspect of driver ability that is associated with crash risk and has been found to decline with age: hazard perception. Three hundred and seven drivers, aged 65 to 96, completed a validated video-based hazard perception test. There was no significant relationship between hazard perception test response latencies and drivers’ ratings of their hazard perception test performance, suggesting that their ability to assess their own test performance was poor. Also, age-related declines in hazard perception latency were not reflected in drivers’ self-ratings. Nonetheless, ratings of test performance were associated with self-reported regulation of driving, as was self-rated driving ability. These findings are consistent with the proposal that, while self-assessments of driving ability may be used by drivers to determine the degree to which they restrict their driving, drivers have little insight into their own driving ability. This may limit the potential road safety benefits of driving self-restriction, because drivers may not have the information needed to optimally self-restrict. Strategies for addressing this problem are discussed.
Abstract:
Ambiguity resolution plays a crucial role in real-time kinematic GNSS positioning, which yields centimetre-level positioning results if all the ambiguities in each epoch are correctly fixed to integers. However, incorrectly fixed ambiguities can result in positioning offsets of up to several metres without notice. Hence, ambiguity validation is essential to control the quality of ambiguity resolution. Currently, the most popular ambiguity validation method is the ratio test, whose criterion is often determined empirically. An empirically determined criterion can be dangerous, because a fixed criterion cannot fit all scenarios and does not directly control the ambiguity resolution risk. In practice, depending on the underlying model strength, the ratio test criterion can be too conservative for some models and too risky for others. A more rational approach is to determine the criterion according to the underlying model and the user requirement. Missed detection of incorrect integers leads to hazardous results and should be strictly controlled; in ambiguity resolution, the missed-detection rate is often known as the failure rate. In this paper, a fixed failure rate ratio test method is presented and applied to the analysis of GPS and Compass positioning scenarios. The fixed failure rate approach is derived from integer aperture estimation theory, which is theoretically rigorous. A table of ratio test criteria is computed from extensive data simulations under this approach, and real-time users can determine the ratio test criterion by looking up this table. The method has been applied to medium-distance GPS ambiguity resolution, but multi-constellation and high-dimensional scenarios have not been discussed so far. In this paper, a general ambiguity validation model is derived based on hypothesis testing theory, the fixed failure rate approach is introduced, and in particular the relationship between the ratio test threshold and the failure rate is examined. Finally, the factors that influence the fixed failure rate ratio test threshold are discussed on the basis of extensive data simulation. The results show that the fixed failure rate approach is a more reasonable ambiguity validation method with a proper stochastic model.
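The ratio test itself is simple to state: compare the weighted sum of squared residuals of the best integer candidate with that of the second-best candidate, and accept the fix only if their ratio clears a threshold. The sketch below uses placeholder values for the float solution, covariance, candidates, and threshold; a fixed-failure-rate implementation would look the threshold up from a model-dependent table rather than hard-coding it.

```python
# Hypothetical sketch of the ratio test for GNSS ambiguity validation.
import numpy as np

a_float = np.array([3.2, -1.9, 7.1])                  # float ambiguity estimate (cycles)
Q = np.diag([0.04, 0.05, 0.03])                       # its variance-covariance matrix
Q_inv = np.linalg.inv(Q)

best = np.array([3, -2, 7])                           # best integer candidate
second = np.array([4, -2, 7])                         # second-best integer candidate

def ssr(candidate):
    d = a_float - candidate
    return float(d @ Q_inv @ d)

ratio = ssr(second) / ssr(best)                       # second-best over best
threshold = 2.0                                       # placeholder; table-driven in practice
accept = ratio >= threshold
print(f"ratio = {ratio:.2f}, fix accepted: {accept}")
```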
Abstract:
One strategy that can be used by older drivers to guard against age-related declines in driving capability is to regulate their driving. This strategy presumes that self-judgments of driving capability are realistic. We found no significant relationships between older drivers’ hazard perception skill ratings and performance on an objective and validated video-based hazard perception test, even when self-ratings of performance on specific scenarios in the test were used. Self-enhancement biases were found across all components of driving skill, including hazard perception. If older drivers’ judgments of their driving capability are unrealistic, then this may compromise the effectiveness of any self-restriction strategies to reduce crash risk.
Abstract:
This study investigates the price linkage among the major US energy sources, considering structural breaks in the time series, to provide information for diversifying US energy sources. We find that only a weak linkage is sustained among crude oil, gasoline, heating oil, coal, natural gas, uranium and ethanol futures prices. This implies that the major US energy source markets are not integrated as one primary energy market. Our tests also reveal that uranium and ethanol futures prices have very weak linkages with the other major energy source prices. This indicates that the US energy market is still at a stage where none of the probable alternative energy source markets plays the role of a substitute or complement market for the fossil fuel energy markets.
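Pairwise linkage of this kind is commonly probed with a residual-based (Engle-Granger) cointegration test. The sketch below illustrates that idea with statsmodels' coint on simulated placeholder series; it ignores the structural breaks the study accounts for.

```python
# Hypothetical sketch of pairwise residual-based (Engle-Granger) cointegration tests.
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(5)
T = 500
crude = np.cumsum(rng.normal(size=T))                    # placeholder price series
gasoline = 0.8 * crude + rng.normal(scale=2.0, size=T)   # cointegrated with crude
uranium = np.cumsum(rng.normal(size=T))                  # independent random walk

for name, series in [("gasoline", gasoline), ("uranium", uranium)]:
    t_stat, p_value, _ = coint(crude, series)
    print(f"crude vs {name}: t = {t_stat:.2f}, p = {p_value:.3f}")
```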
Abstract:
This paper tested the effects of the 2005 vehicle emission-control law issued in Japan on the market linkages between the U.S. and Japanese palladium futures markets. To determine these effects, we applied a cointegration test both with and without break points in the time series and found that the market linkages between the two countries changed after the break in October 2005. Our results show that the 2005 long-term regulation of vehicle emissions enacted in Japan influenced the international palladium futures market.
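One simple way to see whether a linkage changes across a break is to re-run a residual-based cointegration test on sub-samples split at a candidate break date. The sketch below does this with simulated placeholder series and an arbitrary break index, not the U.S./Japanese palladium data or the break-point test used in the paper.

```python
# Hypothetical sketch: residual-based cointegration test on sub-samples around a break.
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(6)
T, break_idx = 600, 300
us = np.cumsum(rng.normal(size=T))                     # placeholder U.S. futures price
jp = np.empty(T)
jp[:break_idx] = 0.9 * us[:break_idx] + rng.normal(scale=1.0, size=break_idx)   # linked
jp[break_idx:] = np.cumsum(rng.normal(size=T - break_idx)) + us[break_idx]       # linkage breaks

for label, sl in [("pre-break", slice(0, break_idx)), ("post-break", slice(break_idx, T))]:
    t_stat, p_value, _ = coint(us[sl], jp[sl])
    print(f"{label}: Engle-Granger t = {t_stat:.2f}, p = {p_value:.3f}")
```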
Abstract:
A comprehensive revision of the Global Burden of Disease (GBD) study is expected to be completed in 2012. This study utilizes a broad range of improved methods for assessing burden, including closer attention to empirically derived estimates of disability. The aim of this paper is to describe how GBD health states were derived for schizophrenia and bipolar disorder. These will be used in deriving health state-specific disability estimates. A literature review was first conducted to settle on a parsimonious set of health states for schizophrenia and bipolar disorder. A second review was conducted to investigate the proportion of schizophrenia and bipolar disorder cases experiencing these health states. These were pooled using a quality-effects model to estimate the overall proportion of cases in each state. The two schizophrenia health states were acute (predominantly positive symptoms) and residual (predominantly negative symptoms). The three bipolar disorder health states were depressive, manic, and residual. Based on estimates from six studies, 63% (38%-82%) of schizophrenia cases were in an acute state and 37% (18%-62%) were in a residual state. Another six studies were identified from which 23% (10%-39%) of bipolar disorder cases were in a manic state, 27% (11%-47%) were in a depressive state, and 50% (30%-70%) were in a residual state. This literature review revealed salient gaps in the literature that need to be addressed in future research. The pooled estimates are indicative only and more data are required to generate more definitive estimates. That said, rather than deriving burden estimates that fail to capture the changes in disability within schizophrenia and bipolar disorder, the derived proportions and their wide uncertainty intervals will be used in deriving disability estimates.
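For a rough sense of how study-level proportions can be pooled, the sketch below combines them on the logit scale with inverse-variance weights. This is a simple fixed-effect pooling, not the quality-effects model used in the study, and the counts are illustrative placeholders only.

```python
# Hypothetical sketch of inverse-variance pooling of proportions on the logit scale.
import numpy as np

# (cases in the health state, total cases) for each hypothetical study
studies = [(30, 50), (45, 70), (20, 40), (55, 80), (25, 45), (60, 90)]

logits, weights = [], []
for k, n in studies:
    p = k / n
    logits.append(np.log(p / (1 - p)))
    weights.append(1.0 / (1.0 / k + 1.0 / (n - k)))   # approx. inverse variance of the logit

pooled_logit = np.average(logits, weights=weights)
pooled_prop = 1.0 / (1.0 + np.exp(-pooled_logit))
print(f"pooled proportion = {pooled_prop:.2%}")
```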
Abstract:
While enhanced cybersecurity options, mainly based around cryptographic functions, are needed, the overall speed and performance of a healthcare network may take priority in many circumstances. As such, the security and performance metrics of those cryptographic functions in their embedded context need to be understood, and understanding those metrics has been the main aim of this research. This research reports on an implementation of one network security technology, Internet Protocol Security (IPSec), to assess security performance. It simulates sensitive healthcare information being transferred over networks and then measures data delivery times under selected security parameters for various communication scenarios on Linux-based and Windows-based systems. Based on our test results, this research has identified a number of network security metrics that need to be considered when designing and managing network security for healthcare-specific or non-healthcare-specific systems from security, performance and manageability perspectives. It proposes practical recommendations, based on the test results, for the effective selection of network security controls to achieve an appropriate balance between network security and performance.
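The kind of delivery-time measurement described above can be sketched as timing a fixed-size payload sent over TCP. Host, port, and payload size below are placeholders; because IPSec is applied at the operating-system level, the same client code can be timed on a plain link and on an IPSec-protected link and the results compared.

```python
# Hypothetical sketch of timing the delivery of a fixed-size payload over TCP.
import socket
import time

HOST, PORT = "192.0.2.10", 5001        # placeholder test endpoint
PAYLOAD = b"\x00" * (1024 * 1024)      # 1 MiB of dummy "patient record" data

start = time.perf_counter()
with socket.create_connection((HOST, PORT)) as sock:
    sock.sendall(PAYLOAD)
    sock.shutdown(socket.SHUT_WR)      # signal end of transmission
    while sock.recv(4096):             # wait until the receiver closes the connection
        pass
elapsed = time.perf_counter() - start

print(f"delivered {len(PAYLOAD)} bytes in {elapsed:.3f} s "
      f"({len(PAYLOAD) / elapsed / 1e6:.2f} MB/s)")
```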