885 results for finite-sample test
Abstract:
Seven sites were drilled during Leg 67 along a transect across the Middle America Trench off Guatemala: four (Sites 494, 496, 497, and 498) on the continental slope, two (Sites 499 and 500) on the Trench floor, and one (Site 495) on the Cocos Plate. We studied the mineralogy of sediments from Sites 494, 495, 496, 499, and 500. Our objective was to investigate the origin and source of individual minerals and mineral assemblages, giving special attention to the influence of basalt alteration on sediment mineralogy, which we expected to be particularly important in the layers just above oceanic basement.
Abstract:
This dissertation contains four essays that share a common purpose: developing new methodologies to exploit the potential of high-frequency data for the measurement, modeling and forecasting of the volatility and correlations of financial assets. The first two chapters provide useful tools for univariate applications, while the last two chapters develop multivariate methodologies. In chapter 1, we introduce a new class of univariate volatility models named FloGARCH models. FloGARCH models provide a parsimonious joint model for low-frequency returns and realized measures, and are sufficiently flexible to capture long memory as well as asymmetries related to leverage effects. We analyze the performance of the models in a realistic numerical study and on the basis of a data set composed of 65 equities. Using more than 10 years of high-frequency transactions, we document significant statistical gains from the FloGARCH models in terms of in-sample fit, out-of-sample fit and forecasting accuracy compared to classical and Realized GARCH models. In chapter 2, using 12 years of high-frequency transactions for 55 U.S. stocks, we argue that combining low-frequency exogenous economic indicators with high-frequency financial data improves the ability of conditionally heteroskedastic models to forecast the volatility of returns, their full multi-step-ahead conditional distribution and the multi-period Value-at-Risk. Using a refined version of the Realized LGARCH model allowing for a time-varying intercept and implemented with realized kernels, we document that nominal corporate profits and term spreads have strong long-run predictive ability and generate accurate risk-measure forecasts over long horizons. The results are based on several loss functions and tests, including the Model Confidence Set. Chapter 3 is joint work with David Veredas. We study the class of disentangled realized estimators for the integrated covariance matrix of Brownian semimartingales with finite-activity jumps. These estimators separate correlations and volatilities. We analyze different combinations of quantile- and median-based realized volatilities, and four estimators of realized correlations with three synchronization schemes. Their finite-sample properties are studied under four data-generating processes, with and without microstructure noise, and under synchronous and asynchronous trading. The main finding is that the pre-averaged version of disentangled estimators based on Gaussian ranks (for the correlations) and median deviations (for the volatilities) provides a precise, computationally efficient, and easy-to-use alternative for measuring integrated covariances from noisy and asynchronous prices. Along these lines, a minimum-variance portfolio application shows the superiority of this disentangled realized estimator in terms of numerous performance metrics. Chapter 4 is co-authored with Niels S. Hansen, Asger Lunde and Kasper V. Olesen, all affiliated with CREATES at Aarhus University. We propose to use the Realized Beta GARCH model to exploit the potential of high-frequency data in commodity markets. The model produces high-quality forecasts of pairwise correlations between commodities, which can be used to construct a composite covariance matrix. We evaluate the quality of this matrix in a portfolio context and compare it to models used in the industry. We demonstrate significant economic gains in a realistic setting including short-selling constraints and transaction costs.
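As a rough, self-contained illustration of the realized measures that these chapters build on (not code from the dissertation), the Python sketch below computes a daily realized variance from intraday returns and a Gaussian-rank correlation of the kind used by the disentangled estimators; all function names and the simulated 5-minute returns are assumptions made for the example.

```python
# Minimal sketch (assumed names and shapes): realized variance from intraday
# returns, and a Gaussian-rank ("van der Waerden") correlation, the rank-based
# flavour of correlation used by disentangled estimators.
import numpy as np
from scipy.stats import norm

def realized_variance(intraday_returns):
    """Sum of squared intraday returns for one trading day."""
    r = np.asarray(intraday_returns)
    return np.sum(r ** 2)

def gaussian_rank_correlation(x, y):
    """Pearson correlation of the normal scores of the ranks of x and y."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    # Ranks mapped into (0, 1), then passed through the inverse normal CDF.
    zx = norm.ppf((np.argsort(np.argsort(x)) + 1) / (n + 1))
    zy = norm.ppf((np.argsort(np.argsort(y)) + 1) / (n + 1))
    return np.corrcoef(zx, zy)[0, 1]

# Toy usage with simulated 5-minute returns for two correlated assets.
rng = np.random.default_rng(0)
r1 = rng.normal(0, 0.001, size=78)
r2 = 0.6 * r1 + rng.normal(0, 0.001, size=78)
print(realized_variance(r1), gaussian_rank_correlation(r1, r2))
```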
Abstract:
Modern hydrological challenges, whether in forecasting or related to climate change, force the exploration of new modelling approaches in order to fill current gaps and improve the assessment of uncertainty. The approach addressed in this thesis is the multimodel (MM) approach. The innovation lies in how the multimodel presented in this study is built: rather than calibrating models individually and then combining them, a collective calibration is performed on the average of the 12 selected lumped conceptual models. One of the challenges raised by this novel approach is the large number of parameters (82), which complicates calibration and use and may also lead to equifinality problems. The solution proposed in this thesis is a sensitivity analysis that fixes the non-influential parameters and thereby reduces the total number of parameters to calibrate. An optimization procedure with calibration and validation then evaluates the performance of the multimodel and of its reduced version, and improves our understanding of it. The sensitivity analysis is carried out with the Morris method, which yields a 51-parameter version of the MM (MM51) that performs as well as the original 82-parameter MM while reducing the potential equifinality problems. The calibration and validation results of the MM under the Split-Sample Test (SST) are compared with the 12 models calibrated individually. This analysis shows that the individual models composing the MM perform worse than when they are calibrated independently. This drop in individual performance, which is necessary to obtain good overall performance of the MM, is accompanied by an increase in the diversity of the outputs of the MM's member models. Such diversity is particularly valuable for hydrological applications that require an assessment of uncertainty. Together, these results improve the understanding of the multimodel and its optimization, which facilitates not only its calibration but also its potential use in an operational context.
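As a hedged illustration of the screening step described above (not the thesis's implementation), the Python sketch below computes Morris elementary effects for a toy model and summarizes them with mu* and sigma, the statistics typically used to decide which parameters can be fixed; the model, bounds and settings are placeholders, not the 82-parameter hydrological multimodel.

```python
# Minimal sketch of the Morris elementary-effects method.
import numpy as np

def morris_mu_star(model, bounds, n_traj=20, levels=4, seed=0):
    """Return mu* (mean absolute elementary effect) and sigma per parameter."""
    rng = np.random.default_rng(seed)
    k = len(bounds)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    delta = levels / (2.0 * (levels - 1))                 # standard Morris step
    base_levels = np.arange(levels // 2) / (levels - 1)   # keeps x + delta <= 1
    effects = [[] for _ in range(k)]
    for _ in range(n_traj):
        x = rng.choice(base_levels, size=k)               # random base point in [0,1]^k
        for i in rng.permutation(k):                      # one-at-a-time moves
            x_new = x.copy()
            x_new[i] += delta
            y0 = model(lo + x * (hi - lo))
            y1 = model(lo + x_new * (hi - lo))
            effects[i].append((y1 - y0) / delta)
            x = x_new
    mu_star = np.array([np.mean(np.abs(e)) for e in effects])
    sigma = np.array([np.std(e) for e in effects])
    return mu_star, sigma

# Toy usage: a 3-parameter model whose third parameter is inert, so its mu*
# should be near zero and that parameter could be fixed before calibration.
f = lambda p: p[0] ** 2 + 0.5 * p[1]
mu_star, sigma = morris_mu_star(f, [(0.0, 1.0)] * 3)
print(mu_star)
```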
Abstract:
International audience
Abstract:
Master's dissertation, Universidade de Brasília, Departamento de Botânica, Programa de Pós-Graduação em Botânica, 2016.
Abstract:
Given the persistence of regional differences in labor income in Colombia, this article quantifies how much of this differential is attributable to differences in labor market structure, understood as differences in the returns to the characteristics of the labor force. To that end, we use an Oaxaca-Blinder decomposition and compare Bogotá, the city with the highest labor income, with the other main cities. The decomposition results show that the differences in structure favor Bogotá and explain more than half of the total gap, indicating that reducing labor income disparities across cities requires more than upgrading the skills of the labor force: the causes behind the differing returns to characteristics across cities must also be investigated.
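As a hedged illustration of the method (not the article's code or data), the Python sketch below performs a two-fold Oaxaca-Blinder decomposition on simulated data, splitting the mean income gap into an endowments part and a structure (returns) part; all variable names and the simulated samples are assumptions.

```python
# Minimal sketch of a two-fold Oaxaca-Blinder decomposition on simulated data.
import numpy as np

def oaxaca_blinder(y_a, X_a, y_b, X_b):
    """Decompose mean(y_a) - mean(y_b) into endowments and structure parts,
    using group B's coefficients as the reference structure."""
    beta_a, *_ = np.linalg.lstsq(X_a, y_a, rcond=None)
    beta_b, *_ = np.linalg.lstsq(X_b, y_b, rcond=None)
    xbar_a, xbar_b = X_a.mean(axis=0), X_b.mean(axis=0)
    endowments = (xbar_a - xbar_b) @ beta_b      # explained by characteristics
    structure = xbar_a @ (beta_a - beta_b)       # explained by differing returns
    return endowments, structure

# Toy usage: city A ("Bogota") vs city B, intercept plus one regressor (schooling).
rng = np.random.default_rng(1)
X_a = np.column_stack([np.ones(500), rng.normal(12, 3, 500)])
X_b = np.column_stack([np.ones(500), rng.normal(10, 3, 500)])
y_a = X_a @ np.array([1.0, 0.12]) + rng.normal(0, 0.3, 500)   # higher returns in A
y_b = X_b @ np.array([1.0, 0.08]) + rng.normal(0, 0.3, 500)
end, struct = oaxaca_blinder(y_a, X_a, y_b, X_b)
print(end, struct, y_a.mean() - y_b.mean())   # the two parts sum to the raw gap
```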
Abstract:
In the context of multivariate linear regression (MLR) and seemingly unrelated regressions (SURE) models, it is well known that commonly employed asymptotic test criteria are seriously biased towards overrejection. In this paper, we propose finite- and large-sample likelihood-based test procedures for possibly non-linear hypotheses on the coefficients of MLR and SURE systems.
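As a rough illustration of the finite-sample idea (not the paper's own procedure), the Python sketch below computes a likelihood-ratio statistic for an exclusion restriction in a Gaussian multivariate regression and obtains a simulation-based p-value by regenerating the statistic under the null, where it does not depend on the unknown parameters; the specific hypothesis and all names are assumptions for the example.

```python
# Hedged sketch of a simulation-based (Monte Carlo) LR test in a Gaussian
# multivariate regression; the hypothesis tested (the last regressor has a
# zero coefficient in every equation) is purely illustrative.
import numpy as np

def lr_stat(Y, X, X0):
    """n * (log|Sigma_restricted| - log|Sigma_unrestricted|)."""
    n = Y.shape[0]
    def resid_cov(Y, Xmat):
        B, *_ = np.linalg.lstsq(Xmat, Y, rcond=None)
        E = Y - Xmat @ B
        return (E.T @ E) / n
    s1 = np.linalg.slogdet(resid_cov(Y, X))[1]
    s0 = np.linalg.slogdet(resid_cov(Y, X0))[1]
    return n * (s0 - s1)

def mc_pvalue(Y, X, X0, n_rep=199, seed=0):
    """Monte Carlo p-value: under the null with Gaussian errors, the LR
    statistic is pivotal, so it can be simulated with arbitrary null values."""
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    obs = lr_stat(Y, X, X0)
    sims = np.array([lr_stat(rng.standard_normal((n, m)), X, X0)
                     for _ in range(n_rep)])
    return (1 + np.sum(sims >= obs)) / (n_rep + 1)

# Toy usage: 3 equations, 4 regressors; test exclusion of the last regressor.
rng = np.random.default_rng(2)
n = 60
X = np.column_stack([np.ones(n), rng.standard_normal((n, 3))])
Y = X[:, :3] @ rng.standard_normal((3, 3)) + rng.standard_normal((n, 3))
print(mc_pvalue(Y, X, X[:, :3]))
```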
Abstract:
This chapter is a tutorial that teaches you how to design extended finite state machine (EFSM) test models for a system that you want to test. EFSM models are more powerful and expressive than simple finite state machine (FSM) models, and are one of the most commonly used styles of models for model-based testing, especially for embedded systems. There are many languages and notations in use for writing EFSM models, but in this tutorial we write our EFSM models in the familiar Java programming language. To generate tests from these EFSM models we use ModelJUnit, which is an open-source tool that supports several stochastic test generation algorithms, and we also show how to write your own model-based testing tool. We show how EFSM models can be used for unit testing and system testing of embedded systems, and for offline testing as well as online testing.
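As a hedged, language-shifted illustration of the modelling style described in this tutorial (ModelJUnit itself is a Java tool; the code below is not its API), the Python sketch encodes a tiny EFSM as a class with guarded action methods and drives it with a random online test generator; the counter system and all names are invented for the example.

```python
# Hedged sketch: an EFSM written as a class with guarded action methods and a
# random-walk test generator, mirroring in Python the style of model used with
# tools such as ModelJUnit (which is Java-based).
import random

class CounterModel:
    """EFSM model of a bounded counter with capacity 3 (state = current count)."""
    def __init__(self):
        self.reset()

    def reset(self):
        self.count = 0

    def get_state(self):
        return self.count

    # Each action is a (guard, transition) pair, as in an EFSM.
    def increment_guard(self):
        return self.count < 3

    def increment(self):
        self.count += 1        # here one would also drive and check the SUT

    def decrement_guard(self):
        return self.count > 0

    def decrement(self):
        self.count -= 1

ACTIONS = [("increment", "increment_guard"), ("decrement", "decrement_guard")]

def random_walk(model, steps, seed=0):
    """Generate an online random test: repeatedly pick an enabled action."""
    rng = random.Random(seed)
    trace = []
    for _ in range(steps):
        enabled = [a for a, g in ACTIONS if getattr(model, g)()]
        action = rng.choice(enabled)
        getattr(model, action)()
        trace.append((action, model.get_state()))
    return trace

print(random_walk(CounterModel(), 10))
```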
Abstract:
Objectives. To investigate the test-retest stability of a standardized version of Nelson's (1976) Modified Card Sorting Test (MCST) and its relationships with demographic variables in a sample of healthy older adults. Design. A standard card order and administration were devised for the MCST and administered to participants at an initial assessment, and again at a second session conducted a minimum of six months later in order to examine its test-retest stability. Participants were also administered the WAIS-R at initial assessment in order to provide a measure of psychometric intelligence. Methods. Thirty-six (24 female, 12 male) healthy older adults aged 52 to 77 years with mean education 12.42 years (SD = 3.53) completed the MCST on two occasions approximately 7.5 months (SD = 1.61) apart. Stability coefficients and test-retest differences were calculated for the range of scores. The effect of gender on MCST performance was examined. Correlations between MCST scores and age, education and WAIS-R IQs were also determined. Results. Stability coefficients ranged from .26 for the percent perseverative errors measure to .49 for the failure to maintain set measure. Several measures were significantly correlated with age, education and WAIS-R IQs, although no effect of gender on MCST performance was found. Conclusions. None of the stability coefficients reached the level required for clinical decision making. The results indicate that participants' age, education, and intelligence need to be considered when interpreting MCST performance. Normative studies of MCST performance as well as further studies with patients with executive dysfunction are needed.
Abstract:
In the area of testing communication systems, the interfaces between the systems to be tested and their testers have a great impact on test generation and fault detectability. Several types of such interfaces have been standardized by the International Organization for Standardization (ISO). A general distributed test architecture, containing distributed interfaces, has been presented in the literature for testing distributed systems based on the Open Distributed Processing (ODP) Basic Reference Model (BRM); it is a generalized version of the ISO distributed test architecture. In this paper we study the issue of test selection with respect to such a test architecture. In particular, we consider communication systems that can be modeled by finite state machines with several distributed interfaces, called ports. A test generation method is developed for generating test sequences for such finite state machines, based on the idea of synchronizable test sequences. Starting from the initial effort by Sarikaya, a certain amount of work has been done on generating test sequences for finite state machines with respect to the ISO distributed test architecture, all based on the idea of modifying existing test generation methods to produce synchronizable test sequences. However, none of these studies examines the fault coverage provided by its method. We investigate the issue of fault coverage and point out that the methods given in the literature for the distributed test architecture cannot ensure the same fault coverage as the corresponding original testing methods. We also study the limitation of fault detectability in the distributed test architecture.
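As a hedged illustration of the notion of synchronizable test sequences discussed above (not the paper's algorithm), the Python sketch below checks whether a sequence of transitions over several ports is synchronizable, i.e. whether the tester that must send each input was involved in the preceding transition; the transition encoding is an assumption made for this example.

```python
# Hedged sketch of the synchronizability check for a multi-port test sequence.
# Each transition is encoded as (input_port, {ports_receiving_outputs}).

def is_synchronizable(sequence):
    """Return True if every consecutive pair of transitions is synchronizable:
    the port sending the next input either sent the previous input or received
    an output of the previous transition."""
    for prev, curr in zip(sequence, sequence[1:]):
        prev_in, prev_outs = prev
        curr_in, _ = curr
        if curr_in != prev_in and curr_in not in prev_outs:
            return False     # synchronization problem between the two testers
    return True

# Toy usage with two ports, U (upper tester) and L (lower tester).
seq_ok = [("U", {"L"}), ("L", {"U", "L"}), ("U", set())]
seq_bad = [("U", {"U"}), ("L", {"L"})]   # L was not involved in the first step
print(is_synchronizable(seq_ok), is_synchronizable(seq_bad))
```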
Abstract:
Investigations of the factor structure of the Alcohol Use Disorders Identification Test (AUDIT) have produced conflicting results. The current study assessed the factor structure of the AUDIT for a group of Mentally Disordered Offenders (MDOs) and examined the pattern of scoring in specific subgroups. The sample comprised 2005 MDOs who completed a battery of tests including the AUDIT. Confirmatory factor analyses revealed that a two-factor solution – alcohol consumption and alcohol-related consequences – provided the best data fit for AUDIT scores. A three-factor solution provided an equally good fit, but the second and third factors were highly correlated and a measure of parsimony also favoured the two-factor solution. This study provides useful information on the factor structure of the AUDIT amongst a large MDO population, while also highlighting the difficulties associated with the presence of people with mental health problems in the criminal justice system.