941 results for test case optimization
Abstract:
This paper proposes a new time-domain test of a process being I(d), 0 < d ≤ 1, under the null, against the alternative of being I(0) with deterministic components subject to structural breaks at known or unknown dates, with the goal of disentangling the identification problem between long memory and structural breaks. Denoting by AB(t) the different types of structural breaks in the deterministic components of a time series considered by Perron (1989), the test statistic proposed here is based on the t-ratio (or the infimum of a sequence of t-ratios) of the estimated coefficient on y_{t-1} in an OLS regression of Δ^d y_t on a simple transformation of the above-mentioned deterministic components and y_{t-1}, possibly augmented by a suitable number of lags of Δ^d y_t to account for serial correlation in the error terms. The case where d = 1 coincides with the Perron (1989) or the Zivot and Andrews (1992) approaches when the break date is known or unknown, respectively. The statistic is labelled the SB-FDF (Structural Break-Fractional Dickey-Fuller) test, since it is based on the same principles as the well-known Dickey-Fuller unit root test. Both its asymptotic behavior and finite sample properties are analyzed, and two empirical applications are provided.
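For readers skimming the listing, the regression the abstract describes can be written schematically as follows (the notation z_t for the transformed deterministic break components AB(t), and k for the lag augmentation order, is ours, not the paper's):

\[
\Delta^{d} y_t \;=\; \phi\, y_{t-1} \;+\; \beta' z_t \;+\; \sum_{j=1}^{k} \gamma_j\, \Delta^{d} y_{t-j} \;+\; \varepsilon_t,
\qquad
t_{\mathrm{SB\text{-}FDF}} \;=\; \frac{\hat{\phi}}{\widehat{\mathrm{se}}(\hat{\phi})},
\]

with the infimum of the t-ratio over candidate break dates used when the break date is unknown.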
Abstract:
Synthesis report: The article that forms the subject of my thesis evaluates a new pedagogical approach to teaching certain chapters of pathophysiology. The teaching scheme alternates ex-cathedra lectures with the use of a website containing clinical vignettes. When working through these vignettes, students are invited to order the laboratory tests whose relevance they can justify for the clinical case under study. The novelty of the approach is that, before the ex-cathedra lecture, the teacher can consult the statistics of laboratory requests and thus tailor the lecture to the points the students have misunderstood. After the lecture, students can consult the complete clinical vignette online, together with explanations. At the end of the course, the students were surveyed. The scheme was run during two consecutive years, and the article discusses the results. We concluded that this innovative teaching method leads students to prepare better for ex-cathedra lectures, while allowing the teacher to identify more precisely which topics students found difficult and to adjust the lecture accordingly. My thesis work consisted of designing this learning scheme, building the web application hosting the clinical vignettes, and deploying it over two consecutive years. I then analysed the evaluation data and wrote the article, which I submitted to the journal 'Medical Teacher'. After a few corrections and clarifications requested by the reviewers, the article was accepted and published. This work led to a second version of the web application, which is currently used in module 3.1 of the third year at the Ecole de Médecine in Lausanne.
Summary: Since the early days of sexual selection theory, our understanding of the selective forces acting on males and females during reproduction has increased remarkably. However, despite a long tradition of experimental and theoretical work in this field and relentless effort, numerous questions remain unanswered and many results are conflicting. Moreover, the interface between sexual selection and conservation biology has to date received little attention, despite existing evidence for its importance. In the present thesis, I first used an empirical approach to test various sexual selection hypotheses in a whitefish population of central Switzerland. This particular population is characterized by a high prevalence of gonadal alterations in males. In particular, I tested the hypothesis that whitefish males displaying peculiar gonadal features are of lower genetic quality than other, seemingly normal males. Additionally, I worked on identifying important determinants of sperm behavior. In a second, theoretical part of my work, which belongs to a larger project on the evolution of female mate preferences in harvested fish populations, I developed an individual-based simulation model to estimate how different mate discrimination costs affect the demographic behavior of fish populations and the evolutionary trajectories of female mate preferences. This latter work gave me some insight into a recently published article addressing the importance of sexual selection for harvesting-induced evolution, which I built upon in a short perspective paper.
In parallel, I let some methodological questions drive my thoughts and wrote an essay on possible synergies between the biological, philosophical, and statistical approaches to biological questions.
Abstract:
Preface: In this thesis we study several questions related to transaction data measured at the individual level. The questions are addressed in three essays. In the first essay we use tick-by-tick data to estimate non-parametrically the jump process of 37 large stocks traded on the Paris Stock Exchange, and of the CAC 40 index. We separate total daily returns into three components (continuous trading, trading jumps, and overnight) and characterize each of them. We estimate, at the individual and index levels, the contribution of each return component to the total daily variability. For the index, the contribution of jumps is smaller and is compensated by the larger contribution of overnight returns. We test formally that individual stocks jump more frequently than the index and that they do not respond independently to the arrival of news. Finally, we find that daily jumps are larger when their arrival rates are larger. At the contemporaneous level there is a strong negative correlation between jump frequency and trading-activity measures. The second essay studies the general properties of the trade- and volume-duration processes for two stocks traded on the Paris Stock Exchange: a very illiquid stock and a relatively liquid one. We estimate a class of autoregressive processes with conditional distribution from the non-central gamma family (up to a scale factor). This process was introduced by Gouriéroux and Jasiak and is known as the autoregressive gamma process. We also evaluate the ability of the process to fit the data, using the Diebold, Gunther and Tay (1998) test as well as the capacity of the model to reproduce the moments of the observed data and the empirical serial correlation and partial serial correlation functions. We establish that the model correctly describes the trade-duration process of illiquid stocks, but has problems fitting the trade-duration process of liquid stocks, which present long-memory characteristics. When the model is adjusted to volume durations, it fits the data successfully. In the third essay we study the economic relevance of optimal liquidation strategies by calibrating a recent and realistic microstructure model with data from the Paris Stock Exchange. We distinguish the case of parameters that are constant through the day from time-varying ones. An optimization problem incorporating this realistic microstructure model is presented and solved. Our model endogenizes the number of trades required before the position is liquidated. A comparative-statics exercise demonstrates the realism of our model. We find that a sell decision taken in the morning will be liquidated by the early afternoon. If price impacts increase over the day, the liquidation takes place more rapidly.
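As a schematic of the decomposition in the first essay (our notation, not the authors'), each total daily return is split as

\[
r_t \;=\; r_t^{\mathrm{overnight}} \;+\; r_t^{\mathrm{continuous}} \;+\; r_t^{\mathrm{jump}},
\]

so that the daily variability can be attributed across the overnight, continuous-trading and jump components at both the stock and index levels.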
Abstract:
In recent years there has been explosive growth in the development of adaptive and data-driven methods. One efficient, data-driven approach is based on statistical learning theory (Vapnik 1998). The theory rests on the Structural Risk Minimisation (SRM) principle and has a solid statistical background. When applying SRM we try not only to reduce the training error, i.e. to fit the available data with a model, but also to reduce the complexity of the model and hence the generalisation error. Many nonlinear learning procedures recently developed in neural networks and statistics can be understood and interpreted in terms of the structural risk minimisation inductive principle. A recent methodology based on SRM is Support Vector Machines (SVM). At present statistical learning theory is still under intensive development and SVM are finding new areas of application (www.kernel-machines.org). SVM develop robust and nonlinear data models with excellent generalisation abilities, which is very important both for monitoring and forecasting. SVM perform extremely well when the input space is high dimensional and the training data set is not large enough to develop a corresponding nonlinear model. Moreover, SVM use only support vectors to derive decision boundaries. This opens a way to sampling optimisation, estimation of noise in data, quantification of data redundancy, etc. A presentation of SVM for spatially distributed data is given in (Kanevski and Maignan 2004).
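A minimal sketch of the kind of workflow described above, using scikit-learn rather than the toolbox of Kanevski and Maignan; the synthetic spatial field and hyperparameters are illustrative assumptions only:

```python
# Minimal sketch: support vector regression on spatially distributed data.
# The synthetic field and hyperparameters are illustrative, not taken from
# the cited works.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(300, 2))            # spatial coordinates (x, y)
z = np.sin(3 * X[:, 0]) * np.cos(3 * X[:, 1]) \
    + 0.1 * rng.standard_normal(300)                # noisy spatial field

scaler = StandardScaler().fit(X)
model = SVR(kernel="rbf", C=10.0, epsilon=0.1, gamma="scale")
model.fit(scaler.transform(X), z)

# Only the support vectors carry the model: their share hints at data
# redundancy and at where additional sampling would (not) help.
n_sv = model.support_vectors_.shape[0]
print(f"{n_sv} support vectors out of {len(z)} samples "
      f"({100.0 * n_sv / len(z):.1f}% of the data)")

# Predict the field on a regular grid (e.g. for mapping/monitoring).
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
z_hat = model.predict(scaler.transform(grid)).reshape(gx.shape)
```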
Abstract:
Student screening by schools acts as a regulatory mechanism of school inclusion and exclusion that normally overrides parental school choice. Based on the "Parents survey 2006" data (n = 188,073) generated by the Chilean Ministry of Education, this paper describes the parents' reasons for choosing their children's school and the schools' criteria for screening students. It concludes that Catholic schools are the most selective institutions and that their screening usually overrides the parents' capacity to choose. One of the reasons for selecting students appears to be the direct relationship between this practice and higher average scores on the test of the Chilean Educational Quality Measurement System (SIMCE).
Abstract:
Relationships between porosity and hydraulic conductivity tend to be strongly scale- and site-dependent and are thus very difficult to establish. As a result, hydraulic conductivity distributions inferred from geophysically derived porosity models must be calibrated using some measurement of aquifer response. This type of calibration is potentially very valuable as it may allow for transport predictions within the considered hydrological unit at locations where only geophysical measurements are available, thus reducing the number of well tests required and thereby the costs of management and remediation. Here, we explore this concept through a series of numerical experiments. Considering the case of porosity characterization in saturated heterogeneous aquifers using crosshole ground-penetrating radar and borehole porosity log data, we use tracer test measurements to calibrate a relationship between porosity and hydraulic conductivity that allows the best prediction of the observed hydrological behavior. To examine the validity and effectiveness of the obtained relationship, we assess its performance at other locations not used in the calibration procedure. Our results indicate that this methodology allows us to obtain remarkably reliable hydrological predictions throughout the considered hydrological unit based on the geophysical data only. This was also found to be the case when significant uncertainty was considered in the underlying relationship between porosity and hydraulic conductivity.
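The calibration idea can be pictured with a simple parametric link between porosity φ and hydraulic conductivity K, for instance a log-linear form (this particular form is our illustration, not necessarily the relationship used in the paper):

\[
\log_{10} K(\mathbf{x}) \;=\; a \;+\; b\, \phi(\mathbf{x}),
\]

where φ(x) comes from the crosshole GPR and porosity-log data, and the coefficients (a, b) are chosen so that flow-and-transport simulations best reproduce the observed tracer test; the calibrated relationship is then applied at locations where only geophysical data are available.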
Abstract:
Several unit root tests in panel data have recently been proposed. The test developed by Harris and Tzavalis (1999, Journal of Econometrics) performs particularly well when the time dimension is moderate in relation to the cross-section dimension. However, in common with the traditional tests designed for the unidimensional case, it was found to perform poorly when there is a structural break in the time series under the alternative. Here we derive the asymptotic distribution of the test allowing for a shift in the mean, and assess its small-sample performance. We apply this new test to show how the hypothesis of (perfect) hysteresis in Spanish unemployment is rejected in favour of the alternative of the natural unemployment rate, when the possibility of a change in the latter is considered.
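Schematically (our notation, not the authors'), the test concerns a panel AR(1) model whose mean may shift at a date T_b under the alternative:

\[
y_{it} \;=\; \alpha_i \;+\; \delta_i\, DU_t \;+\; \rho\, y_{i,t-1} \;+\; \varepsilon_{it},
\qquad DU_t = \mathbf{1}\{t > T_b\},
\]

with the unit-root null H_0: ρ = 1 tested against the stationary broken-mean alternative |ρ| < 1, and asymptotics taken with the cross-section dimension growing while the time dimension stays fixed.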
Abstract:
This paper tests some hypotheses about the determinants of the local tax structure. In particular, we focus on the effects that the deductibility of the property tax in the national income tax has on the relative use of the property tax and user charges. We deal with the incentive effects that local governments face regarding the different sources of revenue by means of a model in which the local tax structure and the level of public expenditure arise as a result of the maximizing behaviour of local politicians subject to the economic effects of the tax system. We test the hypotheses developed with data corresponding to a set of Spanish municipalities during the period 1987-91. We find that tax deductibility provides incentives to raise revenues from the property tax but does not introduce a bias against user charges or in favour of overall spending growth.
Abstract:
OBJECTIVE: To test the hypothesis that calcium pyrophosphate dihydrate (CPPD) deposition disease is a risk factor for neck pain. METHODS: A prevalent case-control study was conducted to compare cervical calcifications and neck pain between patients with and without known peripheral CPPD deposition disease. CPPD cases were included if diagnosed with CPPD deposition disease of peripheral joints, and excluded if their chief complaint was neck pain. Controls were randomly selected among consecutive patients hospitalized for conditions unrelated to CPPD deposition disease or neck pain, and matched to CPPD cases by age and sex. Cervical calcifications were assessed by lateral cervical radiographs and computed tomography scans of the upper cervical spine; neck pain and cervical function were appraised by a validated questionnaire. RESULTS: Cervical calcifications were found in 24 out of 35 patients (69%) in the CPPD group compared to 4 out of 35 patients (11%) in the control group (p < 0.001). Patients with CPPD deposition disease reported significantly more neck pain and discomfort than controls (p < 0.001), and were 5 times more likely to report any neck pain (odds ratio 5.5; 95% confidence interval: 1.9, 21.9). Among male patients, more extensive cervical calcified deposits correlated with more severe neck pain (rs = 0.58, p = 0.03). CONCLUSION: These results suggest that CPPD deposition disease frequently involves the cervical spine and may be associated with the development of neck pain.
Abstract:
This paper presents thermal modeling for power management of a new three-dimensional (3-D) thinned-die stacking process. Besides the high concentration of power-dissipating sources, which is the direct consequence of the very attractive increase in integration efficiency, this new ultra-compact packaging technology can suffer from the poor thermal conductivity (about 700 times lower than that of silicon) of the benzocyclobutene (BCB) used as both adhesive and planarization layer in each level of the stack. Thermal simulation was conducted using a 3-D FEM tool to analyze the specific behaviors of such a stacked structure and to optimize the design rules. The study first describes the heat-transfer limitation along the vertical path, examining in particular the case of high-dissipation sources over a small area. First results of characterization in the transient regime, obtained by means of a dedicated test device mounted in a single-level structure, are presented. For the design optimization, the thermal draining capabilities of a copper grid or a full copper plate embedded in the intermediate layer of the stacked structure are evaluated as a function of the technological parameters and the physical properties. A benefit is shown for transverse heat extraction under the buffer devices, which dissipate most of the power and are generally located in the peripheral zone, and for temperature uniformization, by a heat-spreading mechanism, in the localized regions where the attachment of the thin die is degraded. Finally, the conclusions of this analysis are used for quantitative projections of the thermal performance of a first demonstrator based on a three-level stacking structure for a space application.
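To see why the BCB layers dominate the vertical heat path, a back-of-the-envelope one-dimensional series-resistance estimate is enough; the layer thicknesses, areas and conductivities below are generic assumptions, not values from the paper.

```python
# Rough 1-D series thermal-resistance estimate through a thinned-die stack.
# All thicknesses, areas and conductivities are illustrative assumptions,
# not the paper's parameters (BCB conductivity is a few hundred times
# below that of silicon).
K_SI = 150.0      # thermal conductivity of silicon, W/(m.K)
K_BCB = 0.2       # thermal conductivity of BCB, W/(m.K)

area = 100e-6 * 100e-6        # 100 um x 100 um hot spot under a small source, m^2
t_die = 20e-6                 # thinned silicon die thickness, m
t_bcb = 5e-6                  # BCB adhesive/planarization layer thickness, m
n_levels = 3                  # number of stacked levels (e.g. the demonstrator)

def layer_resistance(thickness_m, conductivity, area_m2):
    """Conduction resistance of one layer: R = t / (k * A), in K/W."""
    return thickness_m / (conductivity * area_m2)

r_si = n_levels * layer_resistance(t_die, K_SI, area)
r_bcb = n_levels * layer_resistance(t_bcb, K_BCB, area)

print(f"silicon contribution: {r_si:8.1f} K/W")
print(f"BCB contribution:     {r_bcb:8.1f} K/W")
print(f"BCB share of vertical resistance: {100 * r_bcb / (r_si + r_bcb):.1f}%")
```

With numbers of this order, the BCB layers account for nearly all of the vertical thermal resistance, which is why embedded copper grids or plates for lateral heat spreading are attractive.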
Abstract:
This paper derives the HJB (Hamilton-Jacobi-Bellman) equation for sophisticated agents in a finite-horizon dynamic optimization problem with non-constant discounting in a continuous-time setting, using a dynamic programming approach. A simple example is used to illustrate the applicability of this HJB equation, suggesting a method for constructing the subgame-perfect equilibrium solution to the problem. Conditions for observational equivalence with an associated problem with constant discounting are analyzed. Special attention is paid to the case of free terminal time. Strotz's model (a cake-eating problem for a nonrenewable resource with non-constant discounting) is revisited.
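For orientation, the constant-discounting benchmark to which the observational-equivalence analysis refers is the standard finite-horizon HJB equation, written here in current-value form with generic notation of our own; the paper's equation for non-constant discounting generalises this:

\[
\rho\, V(t,x) \;-\; V_t(t,x) \;=\; \max_{u}\;\bigl\{\, f(t,x,u) \;+\; V_x(t,x)\, g(t,x,u) \,\bigr\},
\qquad V(T,x) = S(x),
\]

where the state evolves as \(\dot{x} = g(t,x,u)\), f is the instantaneous payoff, ρ the constant discount rate and S the terminal (salvage) value.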
Abstract:
In a weighted spatial network, as specified by an exchange matrix, the variances of the spatial values are inversely proportional to the sizes of the regions. Spatial values are then no longer exchangeable under independence, which weakens the rationale for ordinary permutation and bootstrap tests of spatial autocorrelation. We propose an alternative permutation test for spatial autocorrelation, based upon exchangeable spatial modes, constructed as linear orthogonal combinations of the spatial values. The coefficients are obtained as eigenvectors of the standardised exchange matrix appearing in spectral clustering, and they generalise to the weighted case the concept of spatial filtering for connectivity matrices. In addition, two proposals aimed at transforming an accessibility matrix into an exchange matrix with a priori fixed margins are presented. Two examples (inter-regional migratory flows and binary adjacency networks) illustrate the formalism, which is rooted in the theory of spectral decomposition for reversible Markov chains.
Abstract:
With the trend in molecular epidemiology towards both genome-wide association studies and complex modelling, the need for large sample sizes to detect small effects and to allow for the estimation of many parameters within a model continues to increase. Unfortunately, most methods of association analysis have been restricted to either a family-based or a case-control design, preventing the synthesis of data from multiple studies. Transmission disequilibrium-type methods for detecting linkage disequilibrium from family data were developed as an effective way of preventing the detection of association due to population stratification. Because these methods condition on parental genotype, however, they have precluded the joint analysis of family and case-control data, although methods for case-control data may not protect against population stratification and do not allow for familial correlations. We present here an extension of a family-based association analysis method for continuous traits that will simultaneously test for, and if necessary control for, population stratification. We further extend this method to analyse binary traits (and therefore family and case-control data together) and to estimate genetic effects in the population accurately, even when using an ascertained family sample. Finally, we present the power of this binary extension for both family-only and joint family and case-control data, and demonstrate the accuracy of the association parameter and variance components in an ascertained family sample.