Abstract:
In this paper we propose a novel family of kernels for multivariate time series classification problems. Each time series is approximated by a linear combination of piecewise polynomial functions in a Reproducing Kernel Hilbert Space using a novel kernel interpolation technique. Using the associated kernel function, a large-margin classification formulation is proposed that can discriminate between two classes. The formulation leads to kernels between two multivariate time series that can be computed efficiently. The kernels have been successfully applied to writer-independent handwritten character recognition.
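The paper's kernel is built from piecewise polynomial interpolation in an RKHS; that exact construction is not reproduced here. As a rough, hedged illustration of the general idea (represent each multivariate series by smooth-fit coefficients and classify with a large-margin method on a kernel between those representations), the sketch below fits ordinary per-channel polynomial coefficients and uses their inner product as a precomputed SVM kernel. The data, polynomial degree, and labels are invented for illustration and are not from the paper.

```python
# Minimal sketch (not the authors' formulation): represent each multivariate
# time series by per-channel polynomial fit coefficients and use the inner
# product of those coefficients as a kernel for a large-margin classifier.
import numpy as np
from numpy.polynomial import polynomial as npoly
from sklearn.svm import SVC

def poly_features(series, degree=3):
    """series: (T, d) array -> concatenated per-channel polynomial coefficients."""
    T, d = series.shape
    t = np.linspace(0.0, 1.0, T)
    coefs = [npoly.polyfit(t, series[:, j], degree) for j in range(d)]
    return np.concatenate(coefs)

# Toy data: 40 bivariate series of length 50, two classes differing in trend/frequency.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
classes = rng.choice([1.0, 2.0], size=40)
X = [np.column_stack([c * t + rng.normal(0, 0.1, 50),
                      np.sin(2 * np.pi * c * t) + rng.normal(0, 0.1, 50)])
     for c in classes]
y = (classes == 2.0).astype(int)

F = np.vstack([poly_features(s) for s in X])   # coefficient representation of each series
K = F @ F.T                                    # linear kernel between series
clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```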
Abstract:
Accurate and stable time series of geodetic parameters can be used to help in understanding the dynamic Earth and its response to global change. The Global Positioning System (GPS) has proven to be invaluable in modern geodynamic studies. In Fennoscandia the first GPS networks were set up in 1993. These networks form the basis of the national reference frames in the area, but they also provide long and important time series for crustal deformation studies. These time series can be used, for example, to better constrain the ice history of the last ice age and the Earth's structure via existing glacial isostatic adjustment models. To improve the accuracy and stability of the GPS time series, the possible nuisance parameters and error sources need to be minimized. We have analysed GPS time series to study two phenomena: first, the refraction of the GPS signal in the neutral atmosphere, and, second, the loading of the crust by environmental factors, namely the non-tidal Baltic Sea, the atmosphere and varying continental water reservoirs. We studied the atmospheric effects on the GPS time series by comparing the standard method to slant delays derived from a regional numerical weather model, and we present a method for correcting the atmospheric delays at the observational level. The results show that both standard atmosphere modelling and the atmospheric delays derived from a numerical weather model by ray-tracing provide a stable solution. The advantage of the latter is that the number of unknowns used in the computation decreases; thus, the computation may become faster and more robust. The computation can also be done with any processing software that allows the atmospheric correction to be turned off. The crustal deformation due to loading was computed by convolving Green's functions with surface load data, that is, global hydrology models, global numerical weather models and a local model for the Baltic Sea. The result was that the loading signals can be seen in the GPS coordinate time series. Subtracting the computed deformation from the vertical GPS coordinate time series reduces the scatter of the time series; however, the long-term trends are not affected. We show that global hydrology models and the local sea surface can explain up to 30% of the variation in the GPS time series. On the other hand, the admittance of atmospheric loading in the GPS time series is low, and the different hydrological surface load models could not be validated in the present study. In order to be used for GPS corrections in the future, both atmospheric loading and hydrological models need further analysis and improvement.
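As a purely illustrative sketch of the loading-correction step described above (subtracting a modeled deformation from a vertical GPS coordinate series and quantifying the reduction in scatter), the following uses synthetic daily data with an assumed annual load signal; the numbers, noise level, and the simple admittance estimate are not from the thesis.

```python
# Illustrative sketch only (synthetic numbers, not the thesis data): subtract a
# modeled loading deformation from a GPS vertical coordinate series and check
# how much of the scatter the load model explains.
import numpy as np

rng = np.random.default_rng(1)
days = np.arange(3650)                               # ~10 years of daily solutions
trend = 5.0 / 365.25 * days                          # assumed 5 mm/yr uplift
loading = 4.0 * np.sin(2 * np.pi * days / 365.25)    # modeled annual load signal, mm
noise = rng.normal(0.0, 3.0, days.size)              # observation noise, mm
gps_up = trend + loading + noise                     # synthetic GPS vertical series

# Admittance of the load model in the GPS series: least-squares scale factor
# after removing the linear trend from both series.
detrended_gps = gps_up - np.polyval(np.polyfit(days, gps_up, 1), days)
detrended_load = loading - np.polyval(np.polyfit(days, loading, 1), days)
admittance = np.dot(detrended_load, detrended_gps) / np.dot(detrended_load, detrended_load)

# Scatter before and after subtracting the modeled deformation.
corrected = gps_up - loading
rms_before = np.std(detrended_gps)
rms_after = np.std(corrected - np.polyval(np.polyfit(days, corrected, 1), days))
explained = 1.0 - (rms_after / rms_before) ** 2

print(f"admittance ~ {admittance:.2f}")
print(f"variance explained by the load model ~ {100 * explained:.0f}%")
```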
Abstract:
A model for total cross-sections incorporating QCD jet cross-sections and soft gluon resummation is described and compared with present data on pp and p̄p cross-sections. Predictions for the LHC are presented for different parameter sets. It is shown that they differ according to the small-x behaviour of the available parton density functions.
Abstract:
This thesis studies quantile residuals and uses different methodologies to develop test statistics that are applicable in evaluating linear and nonlinear time series models based on continuous distributions. Models based on mixtures of distributions are of special interest because it turns out that for those models traditional residuals, often referred to as Pearson's residuals, are not appropriate. As such models have become more and more popular in practice, especially with financial time series data, there is a need for reliable diagnostic tools that can be used to evaluate them. The aim of the thesis is to show how such diagnostic tools can be obtained and used in model evaluation. The quantile residuals considered here are defined in such a way that, when the model is correctly specified and its parameters are consistently estimated, they are approximately independent with a standard normal distribution. All the tests derived in the thesis are pure significance type tests and are theoretically sound in that they properly take into account the uncertainty caused by parameter estimation. In Chapter 2, a general framework based on the likelihood function and smooth functions of univariate quantile residuals is derived that can be used to obtain misspecification tests for various purposes. Three easy-to-use tests aimed at detecting non-normality, autocorrelation, and conditional heteroscedasticity in quantile residuals are formulated. It also turns out that these tests can be interpreted as Lagrange Multiplier or score tests, so that they are asymptotically optimal against local alternatives. Chapter 3 extends the concept of quantile residuals to multivariate models. The framework of Chapter 2 is generalized, and tests aimed at detecting non-normality, serial correlation, and conditional heteroscedasticity in multivariate quantile residuals are derived from it. Score test interpretations are obtained for the serial correlation and conditional heteroscedasticity tests and, in a rather restricted special case, for the normality test. In Chapter 4 the tests are constructed using the empirical distribution function of quantile residuals. The so-called Khmaladze martingale transformation is applied in order to eliminate the uncertainty caused by parameter estimation. Various test statistics are considered, so that critical bounds for histogram-type plots as well as Quantile-Quantile and Probability-Probability type plots of quantile residuals are obtained. Chapters 2, 3, and 4 contain simulations and empirical examples that illustrate the finite sample size and power properties of the derived tests and show how the tests and related graphical tools based on residuals are applied in practice.
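For a model with conditional distribution function F(y_t | past; θ), the quantile residual is r_t = Φ⁻¹(F(y_t | past; θ̂)), approximately iid N(0,1) when the model is correctly specified. The sketch below illustrates this in the simplest Gaussian AR(1) case (where quantile residuals coincide with standardized residuals; for mixture models the mixture's conditional CDF would be used instead), together with two simple diagnostics in the spirit of the thesis. The model, data, and tests are illustrative only, not the thesis' statistics.

```python
# Sketch of how quantile residuals can be computed (toy Gaussian AR(1) case).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, phi, sigma = 500, 0.6, 1.0
y = np.zeros(n)
for t in range(1, n):                        # simulate an AR(1) process
    y[t] = phi * y[t - 1] + rng.normal(0.0, sigma)

# Estimate the AR(1) parameters by least squares.
Y, X = y[1:], y[:-1]
phi_hat = np.dot(X, Y) / np.dot(X, X)
sigma_hat = np.std(Y - phi_hat * X, ddof=1)

# Quantile residuals: u_t = F(y_t | past), r_t = Phi^{-1}(u_t).
# For a correctly specified model they are approximately iid N(0, 1).
u = stats.norm.cdf(Y, loc=phi_hat * X, scale=sigma_hat)
r = stats.norm.ppf(u)

# Simple diagnostics: non-normality and autocorrelation of the quantile residuals.
print("Jarque-Bera p-value:", stats.jarque_bera(r).pvalue)
print("lag-1 autocorrelation:", np.corrcoef(r[1:], r[:-1])[0, 1])
```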
Abstract:
This thesis studies binary time series models and their applications in empirical macroeconomics and finance. In addition to previously suggested models, new dynamic extensions are proposed to the static probit model commonly used in the previous literature. In particular, we are interested in probit models with an autoregressive model structure. In Chapter 2, the main objective is to compare the predictive performance of the static and dynamic probit models in forecasting the U.S. and German business cycle recession periods. Financial variables, such as interest rates and stock market returns, are used as predictors. The empirical results suggest that the recession periods are predictable and that dynamic probit models, especially those with an autoregressive structure, outperform the static model. Chapter 3 proposes a Lagrange Multiplier (LM) test for the usefulness of the autoregressive structure of the probit model. The finite sample properties of the LM test are examined with simulation experiments. The results indicate that the two alternative LM test statistics have reasonable size and power in large samples; in small samples, a parametric bootstrap method is suggested to obtain approximately correct size. In Chapter 4, the predictive power of dynamic probit models in predicting the direction of stock market returns is examined. The novel idea is to use the recession forecast (see Chapter 2) as a predictor of the sign of the stock return. The evidence suggests that the signs of U.S. excess stock returns over the risk-free return are predictable both in and out of sample. The new "error correction" probit model yields the best forecasts, and it also outperforms other predictive models, such as ARMAX models, in terms of statistical and economic goodness-of-fit measures. Chapter 5 generalizes the analysis of the univariate models considered in Chapters 2–4 to the case of a bivariate model. A new bivariate autoregressive probit model is applied to predict the current state of the U.S. business cycle and growth rate cycle periods. Evidence of predictability of both cycle indicators is obtained, and the bivariate model is found to outperform the univariate models in terms of predictive power.
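A hedged sketch of what an autoregressive probit model can look like: the success probability is Φ(π_t) with π_t = ω + α π_{t−1} + β x_{t−1}, and the parameters are estimated by maximizing a likelihood evaluated through that recursion. The specification, variable names, data, and starting values below are illustrative and not necessarily those used in the thesis.

```python
# Hedged sketch of an autoregressive probit model:
# P(y_t = 1 | past) = Phi(pi_t), pi_t = omega + alpha * pi_{t-1} + beta * x_{t-1}.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(3)
n = 400
x = rng.normal(size=n)                               # a single predictor (e.g. a term spread)
pi = np.zeros(n)
y = np.zeros(n, dtype=int)
omega, alpha, beta = -0.5, 0.8, 0.6
for t in range(1, n):                                # simulate from the model
    pi[t] = omega + alpha * pi[t - 1] + beta * x[t - 1]
    y[t] = rng.random() < stats.norm.cdf(pi[t])

def neg_loglik(theta, y, x):
    """Negative log-likelihood evaluated through the autoregressive recursion."""
    omega, alpha, beta = theta
    pi_t, ll = 0.0, 0.0
    for t in range(1, len(y)):
        pi_t = omega + alpha * pi_t + beta * x[t - 1]
        p = np.clip(stats.norm.cdf(pi_t), 1e-10, 1 - 1e-10)   # guard the log
        ll += y[t] * np.log(p) + (1 - y[t]) * np.log(1 - p)
    return -ll

res = optimize.minimize(neg_loglik, x0=[0.0, 0.5, 0.0], args=(y, x), method="Nelder-Mead")
print("estimated (omega, alpha, beta):", np.round(res.x, 2))
```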
Abstract:
We present a measurement of the $WW+WZ$ production cross section observed in a final state consisting of an identified electron or muon, two jets, and missing transverse energy. The measurement is carried out in a data sample corresponding to up to 4.6~fb$^{-1}$ of integrated luminosity at $\sqrt{s} = 1.96$ TeV collected by the CDF II detector. Matrix element calculations are used to separate the diboson signal from the large backgrounds. The $WW+WZ$ cross section is measured to be $17.4\pm3.3$~pb, in agreement with standard model predictions. A fit to the dijet invariant mass spectrum yields a compatible cross section measurement.
Abstract:
We report a measurement of the single top quark production cross section in 2.2 fb-1 of p-pbar collision data collected by the Collider Detector at Fermilab at sqrt{s} = 1.96 TeV. Candidate events are classified as signal-like by three parallel analyses which use likelihood, matrix element, and neural network discriminants. These results are combined in order to improve the sensitivity. We observe a signal consistent with the standard model prediction but inconsistent with the background-only model by 3.7 standard deviations, with a median expected sensitivity of 4.9 standard deviations. We measure a cross section of 2.2 +0.7 -0.6 (stat+sys) pb, extract the CKM matrix element value |V_{tb}| = 0.88 +0.13 -0.12 (stat+sys) ± 0.07 (theory), and set the limit |V_{tb}| > 0.66 at the 95% C.L.
Abstract:
We report two complementary measurements of the WW+WZ cross section in the final state consisting of an electron or muon, missing transverse energy, and jets, performed using p\bar{p} collision data at sqrt{s} = 1.96 TeV collected by the CDF II detector. The first method uses the dijet invariant mass distribution, while the second, more sensitive method uses matrix-element calculations. The result from the second method has a signal significance of 5.4 sigma and is the first observation of WW+WZ production using this signature. Combining the results gives sigma_{WW+WZ} = 16.0 +/- 3.3 pb, in agreement with the standard model prediction.
Abstract:
We report a measurement of the ratio of the tt̅ to Z/γ* production cross sections in √s = 1.96 TeV pp̅ collisions using data corresponding to an integrated luminosity of up to 4.6 fb-1, collected by the CDF II detector. The tt̅ cross section ratio is measured using two complementary methods, a b-jet tagging measurement and a topological approach. By multiplying the ratios by the well-known theoretical Z/γ*→ll cross section predicted by the standard model, the extracted tt̅ cross sections are made effectively insensitive to the uncertainty on the luminosity. A best linear unbiased estimate is used to combine the two measurements, giving σtt̅ = 7.70 ± 0.52 pb for a top-quark mass of 172.5 GeV/c2.
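The reason the ratio removes the luminosity uncertainty is that both cross sections are estimated as an event yield divided by acceptance times luminosity, so the luminosity cancels in the ratio and only re-enters through the precisely predicted Z/γ* cross section. The toy numbers below are made up purely to show the cancellation and bear no relation to the actual CDF yields or acceptances.

```python
# Back-of-the-envelope illustration of why the ratio method is insensitive to
# the luminosity: sigma = N / (eps * L) for both processes, so L cancels in R.
N_tt, eps_tt = 1020.0, 0.04      # hypothetical tt-bar yield and acceptance
N_Z, eps_Z = 250000.0, 0.30      # hypothetical Z -> ll yield and acceptance
sigma_Z_theory = 251.0           # pb, illustrative theoretical prediction

for L in (4200.0, 4600.0):       # pb^-1; pretend the luminosity is uncertain
    sigma_tt_direct = N_tt / (eps_tt * L)            # shifts with the assumed L
    R = (N_tt / eps_tt) / (N_Z / eps_Z)              # L cancels in the ratio
    sigma_tt_ratio = R * sigma_Z_theory              # insensitive to L
    print(f"L = {L:.0f} pb^-1: direct = {sigma_tt_direct:.2f} pb, "
          f"via ratio = {sigma_tt_ratio:.2f} pb")
```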
Abstract:
We present a measurement of the $t\bar{t}$ differential cross section with respect to the $t\bar{t}$ invariant mass, $d\sigma/dM_{t\bar{t}}$, in $p\bar{p}$ collisions at $\sqrt{s}=1.96$ TeV using an integrated luminosity of 2.7 fb$^{-1}$ collected by the CDF II experiment. The $t\bar{t}$ invariant mass spectrum is sensitive to a variety of exotic particles decaying into $t\bar{t}$ pairs. The result is consistent with the standard model expectation, as modeled by PYTHIA with CTEQ5L parton distribution functions.
Abstract:
The likelihood ratio test of cointegration rank is the most widely used test for cointegration. Many studies have shown that its finite sample distribution is not well approximated by the limiting distribution. This article introduces, and evaluates by Monte Carlo simulation experiments, bootstrap and fast double bootstrap (FDB) algorithms for the likelihood ratio test. It finds that the performance of the bootstrap test is very good. The more sophisticated FDB produces a further improvement in cases where the performance of the asymptotic test is very unsatisfactory and the ordinary bootstrap does not work as well as it might. Furthermore, the Monte Carlo simulations provide a number of guidelines on when the bootstrap and FDB tests can be expected to work well. Finally, the tests are applied to series of US interest rates and international stock prices. It is found that the asymptotic test tends to overestimate the cointegration rank, while the bootstrap and FDB tests choose the correct cointegration rank.
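A rough sketch of a residual-based bootstrap for the likelihood ratio (trace) test of cointegration rank, restricted to the simplest case H0: rank = 0 in a VAR(1) with a constant. The data, lag order, and number of bootstrap replications are illustrative, and the fast double bootstrap of the article, which adds one second-level resample per first-level draw, is omitted for brevity.

```python
# Bootstrap of the Johansen trace statistic for H0: rank = 0 (illustrative only).
import numpy as np

def trace_stat_r0(y):
    """Trace statistic for H0: rank = 0 in a VAR(1) with a constant."""
    dy = np.diff(y, axis=0)
    ylag = y[:-1]
    R0 = dy - dy.mean(axis=0)          # residuals of dy on a constant
    R1 = ylag - ylag.mean(axis=0)      # residuals of y_{t-1} on a constant
    T = R0.shape[0]
    S00, S01, S11 = R0.T @ R0 / T, R0.T @ R1 / T, R1.T @ R1 / T
    eigvals = np.linalg.eigvals(np.linalg.solve(S11, S01.T) @ np.linalg.solve(S00, S01))
    lam = np.sort(np.real(eigvals))[::-1]
    return -T * np.sum(np.log(1.0 - lam))

rng = np.random.default_rng(4)
T, k = 100, 2
y = np.cumsum(rng.normal(size=(T, k)), axis=0)   # two independent random walks (H0 true)
stat_obs = trace_stat_r0(y)

# Bootstrap under H0 (rank 0): the null model is a random walk with drift, so
# resample the centered first differences with replacement and re-cumulate.
dy = np.diff(y, axis=0)
dy_centered = dy - dy.mean(axis=0)
B, stats_boot = 399, []
for _ in range(B):
    idx = rng.integers(0, dy_centered.shape[0], dy_centered.shape[0])
    y_boot = np.vstack([y[:1],
                        y[:1] + np.cumsum(dy.mean(axis=0) + dy_centered[idx], axis=0)])
    stats_boot.append(trace_stat_r0(y_boot))

p_boot = np.mean(np.array(stats_boot) >= stat_obs)
print(f"trace statistic = {stat_obs:.2f}, bootstrap p-value = {p_boot:.3f}")
```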
Evolution in the time series of vortex velocity fluctuations across different regimes of vortex flow
Abstract:
Investigations of vortex velocity fluctuations in the time domain have revealed the presence of low-frequency velocity fluctuations which evolve with the different driven phases of the vortex state in a single crystal of 2H-NbSe2. The observation of velocity fluctuations with a characteristic low frequency is associated with the onset of the nonlinear nature of vortex flow deep in the driven elastic vortex state.
Abstract:
Numerical solutions are presented for the free convection boundary layers over cylinders of elliptic cross section embedded in a fluid-saturated porous medium. The transformed conservation equations of the nonsimilar boundary layers are solved numerically by an efficient finite-difference method. The theory was applied to a number of cylinders, and the results compare very well with published analytical solutions. The results are of use in the design of underground electrical cables, power plant steam and water distribution lines, and other applications.