972 results for Residual-Based Cointegration Test


Relevance:

100.00%

Publisher:

Abstract:

To address the shortcomings of neighbour selection in locally linear embedding (LLE) under the Euclidean distance, this paper starts from the definition of LLE and introduces a tangent-space distance, improving the neighbour-selection method so that LLE's requirement of local linearity is better satisfied. The method's performance is evaluated with the residual variance; simulations on the S-curve and Swiss-roll data sets show that the tangent-space-distance-based method better represents the quality of the input/output mapping of the data.
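For reference, the Euclidean-distance baseline that the abstract improves on can be reproduced in a few lines. This is a minimal sketch using scikit-learn's Swiss-roll generator and `LocallyLinearEmbedding`; the tangent-space neighbour selection proposed in the paper is not implemented in scikit-learn, so only the standard method is shown.

```python
# Baseline LLE on the Swiss-roll data set with Euclidean-distance
# neighbours; the paper replaces this neighbour-selection rule with a
# tangent-space distance, which scikit-learn does not provide.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_swiss_roll(n_samples=1000, noise=0.05, random_state=0)

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
Y = lle.fit_transform(X)

# reconstruction_error_ plays a role similar to the residual variance
# the abstract uses to score embedding quality (lower is better).
print(Y.shape, lle.reconstruction_error_)
```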

Relevance:

100.00%

Publisher:

Abstract:

This paper presents the design and implementation of a measurement-based QoS and resource management framework, CNQF (Converged Networks' QoS Management Framework). CNQF is designed to provide unified, scalable QoS control and resource management through a policy-based network management paradigm. It achieves this via distributed functional entities that are deployed to coordinate the resources of the transport network through centralized policy-driven decisions supported by a measurement-based control architecture. We present the CNQF architecture, the implementation of the prototype, and the validation of various inbuilt QoS control mechanisms using real traffic flows on a Linux-based experimental test bed.

Relevance:

100.00%

Publisher:

Abstract:

Identification of Rhizoctonia solani, R. oryzae and R. oryzae-sativae, components of the rice sheath disease complex, is extremely difficult and often inaccurate and as a result may hinder the success of extensive breeding programmes throughout Asia. In this study, primers designed from unique regions within the rDNA internal transcribed spacers have been used to develop a rapid PCR-based diagnostic test to provide an accurate identification of the species on rice. Tests on the specificity of the primers concerned showed that they provide the means for accurate identification of the Rhizoctonia species responsible for sheath diseases in rice.

Relevance:

100.00%

Publisher:

Abstract:

In the literature on tests of normality, much concern has been expressed over the problems associated with residual-based procedures. Indeed, the specialized tables of critical points needed to perform the tests have been derived for the location-scale model; hence relying on the available significance points in the context of regression models may cause size distortions. We propose a general solution to the problem of controlling the size of normality tests for the disturbances of standard linear regressions, based on the technique of Monte Carlo tests.
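The Monte Carlo testing idea the abstract refers to can be illustrated with a short sketch. The statistic below is the familiar Jarque-Bera statistic computed on OLS residuals (chosen here purely for illustration; the paper's framework applies to residual-based normality statistics generally), and the null distribution is simulated on the same design matrix instead of read from location-scale tables.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def ols_residuals(X, y):
    """Residuals from an OLS fit of y on X (X includes the intercept)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def jarque_bera(e):
    """Jarque-Bera normality statistic of a residual vector."""
    n = len(e)
    s = stats.skew(e)
    k = stats.kurtosis(e, fisher=True)  # excess kurtosis
    return n / 6.0 * (s ** 2 + k ** 2 / 4.0)

def mc_normality_pvalue(X, y, n_rep=999):
    """Monte Carlo p-value: simulate the statistic's null distribution
    with i.i.d. normal errors on the same design matrix X, so the test
    controls size for this regression rather than the location-scale
    model (residuals are invariant to the true beta and error scale)."""
    s_obs = jarque_bera(ols_residuals(X, y))
    s_sim = np.array([jarque_bera(ols_residuals(X, rng.standard_normal(len(y))))
                      for _ in range(n_rep)])
    return (1 + np.sum(s_sim >= s_obs)) / (n_rep + 1)

# Toy example with normal disturbances, so the test should rarely reject.
n = 100
X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.standard_normal(n)
p = mc_normality_pvalue(X, y)
print(p)
```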

Relevance:

100.00%

Publisher:

Abstract:

My thesis consists of three essays on bootstrap inference, both in panel data models and in models with many instrumental variables (IV), a large number of which may be weak. Since asymptotic theory is not always a good approximation to the sampling distribution of estimators and test statistics, I consider the bootstrap as an alternative. These essays study the asymptotic validity of existing bootstrap procedures and, where they are invalid, propose new valid bootstrap methods. The first chapter (co-written with Sílvia Gonçalves) studies the validity of the bootstrap for inference in a linear, dynamic and stationary panel data model with fixed effects. We consider three bootstrap methods: the recursive-design bootstrap, the fixed-design bootstrap and the pairs bootstrap. These methods are natural generalizations to the panel context of the bootstrap methods considered by Gonçalves and Kilian (2004) for autoregressive time-series models. We show that the OLS estimator obtained under the recursive-design bootstrap contains a built-in term that mimics the bias of the original estimator. This contrasts with the fixed-design and pairs bootstraps, whose distributions are incorrectly centered at zero. However, the recursive-design and pairs bootstraps are asymptotically valid when applied to the bias-corrected estimator, unlike the fixed-design bootstrap. In simulations, the recursive-design bootstrap is the method that produces the best results.
The second chapter extends the pairs-bootstrap results to dynamic nonlinear panel models with fixed effects. These models are often estimated by maximum likelihood (ML), whose estimator also suffers from a bias. Recently, Dhaene and Johmans (2014) proposed the split-jackknife estimation method. Although these estimators have normal asymptotic approximations centered on the true parameter, serious finite-sample distortions remain. Dhaene and Johmans (2014) proposed the pairs bootstrap as an alternative in this context, without any theoretical justification. To fill this gap, I show that this method is asymptotically valid when used to estimate the distribution of the split-jackknife estimator, although it cannot estimate the distribution of the ML estimator. Monte Carlo simulations show that bootstrap confidence intervals based on the split-jackknife estimator greatly reduce the distortions of the finite-sample normal approximation. I also apply this bootstrap method to a model of female labour-force participation to construct valid confidence intervals. In the last chapter (co-written with Wenjie Wang), we study the asymptotic validity of bootstrap procedures for models with many instrumental variables (IV), a large number of which may be weak. We show analytically that a standard residual-based bootstrap and the restricted efficient (RE) bootstrap of Davidson and MacKinnon (2008, 2010, 2014) cannot estimate the limiting distribution of the limited-information maximum likelihood (LIML) estimator. The main reason is that they fail to mimic the parameter that characterizes the strength of identification in the sample. We therefore propose a modified bootstrap method that consistently estimates this limiting distribution. Our simulations show that the modified bootstrap considerably reduces the finite-sample distortions of asymptotic Wald (t) tests, especially when the degree of endogeneity is high.
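The pairs bootstrap for panels discussed in the first two chapters resamples whole cross-sectional units with replacement, keeping each unit's entire time series together. The following is a minimal numpy sketch for an AR(1) panel with fixed effects; it is an illustration of the resampling scheme under assumed toy parameters, not the thesis's implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_panel(n_units=50, t_len=10, rho=0.5):
    """Stationary AR(1) panel with unit-specific fixed effects."""
    alpha = rng.standard_normal(n_units)
    y = np.zeros((n_units, t_len + 1))
    for t in range(1, t_len + 1):
        y[:, t] = alpha + rho * y[:, t - 1] + rng.standard_normal(n_units)
    return y

def within_ar1(y):
    """Fixed-effects (within) estimate of rho; biased downward in
    short panels (the bias the recursive-design bootstrap mimics)."""
    lag, cur = y[:, :-1], y[:, 1:]
    lag_d = lag - lag.mean(axis=1, keepdims=True)
    cur_d = cur - cur.mean(axis=1, keepdims=True)
    return (lag_d * cur_d).sum() / (lag_d ** 2).sum()

def pairs_bootstrap(y, n_boot=200):
    """Pairs bootstrap for panels: draw whole units (rows) with
    replacement and re-estimate rho on each resampled panel."""
    n = y.shape[0]
    return np.array([within_ar1(y[rng.integers(0, n, n)])
                     for _ in range(n_boot)])

y = simulate_panel()
rho_hat = within_ar1(y)
boot = pairs_bootstrap(y)
print(rho_hat, boot.std())
```

The bootstrap standard deviation can then be used for confidence intervals centered on a bias-corrected estimate, in the spirit of the chapter's validity result.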

Relevance:

100.00%

Publisher:

Abstract:

The ripple effect of house prices within metropolitan areas has recently been recognised by researchers. However, it is very difficult to formulate and measure this effect using conventional house price theories, particularly in consideration of the spatial locations of cities. Based on the econometric principles of the cointegration test and the error correction model, this research develops an innovative approach to quantitatively examine the diffusion patterns of house prices across the mega-cities of a country. Taking Australia's eight capital cities as an example, the proposed approach is validated through an empirical study. The results show that a 1-1-2-4 diffusion pattern exists within these cities: Sydney is on the top tier, with Melbourne on the second; Perth and Adelaide are on the third level, and the other four cities lie on the bottom. This research may be applied to predict regional housing market behaviour in a country.

Relevance:

100.00%

Publisher:

Abstract:

The measurement of fibre quality in bast fibres is related to the amount of gum (lignin, hemicellulose, wax and pectin) left in the fibre after the retting process. A large amount of residual gum indicates poor fibre separation. The efficiency of retting can therefore be monitored by measuring the residual gum content of the retted fibre.

This paper investigated the use of ultrasonic vibration combined with chemical retting as a pre-treatment to improve the accuracy of the traditional residual gum content test. The work was conducted on chemically retted hemp fibre. Pre-treatment conditions were analysed to determine the best chemical combination, chemical concentration and treatment time. Fibres were examined for successful separation using optical microscopy and optical fibre diameter analysis (OFDA). The work proposes a new method for determining the residual gum content of hemp fibre.

Relevance:

100.00%

Publisher:

Abstract:

This paper examines whether the Australian equity market is integrated with the equity markets of the G7 economies by applying both the Johansen (Statistical analysis of cointegrating vectors, Journal of Economic Dynamics and Control, 12, 231-54, 1988) and Gregory and Hansen (Residual-based tests for cointegration in models with regime shifts, Journal of Econometrics, 70, 99-126, 1996) approaches to cointegration. Some evidence of a pairwise long-run relationship between the Australian stock market and the stock markets of Canada, Italy, Japan and the United Kingdom is found, but the Australian equity market is not pairwise cointegrated with the equity markets of France, Germany or the USA.

Relevance:

100.00%

Publisher:

Abstract:

This study examined the effects of game situation information, manipulated in terms of time and score, on decisions made in a video-based perceptual test in basketball. The participants were undergraduate university students (n=159) who viewed 21 offensive basketball plays under two test conditions (low decision criticality; high decision criticality). To manipulate the conditions, prior to each clip the participants were presented with a description of the remaining time and score differential. High decision criticality situations were characterised by a remaining time of 60 seconds or less and score differentials of 2 points or less. Low decision criticality situations were characterised by a remaining time of 5 minutes or more and score differentials of 5 points or more. The participants indicated their decision (pass, shoot, dribble) after the visual display had been occluded for each clip. The results indicated that decision profiles differed under the low and high decision criticality conditions. More pass decisions were made under high decision criticality situations and more shoot decisions under low decision criticality situations. These variations differed according to the type of main sport played but not according to basketball competition level. It was concluded that game situation information does influence decision making and should be considered in video-based testing and training.

Relevance:

100.00%

Publisher:

Abstract:

Structured products are combinations of assets that include a fixed-income component and one or more embedded derivatives. In Brazil, since there is not yet specific regulation as in the United States and Europe, these products are sold mainly through structured investment funds. The aim of this study is to assess whether structured investment funds are overpriced at issuance. To that end, the difference between the issue price and the theoretical price was computed. The theoretical price was obtained by synthesizing a portfolio composed of a fixed-income component and the embedded derivatives, pricing both components with the same methodology used in national and international publications. Forty closed-end investment funds issued between 2006 and 2011 were analyzed, and evidence of a price difference was found, a conclusion similar to that of other studies on the subject. This price difference can be explained by product-development costs, by the hedging costs of the operations and by the fact that small investors do not have direct access to this market. Additionally, the existence of a long-run relationship between volatility and the price difference was analyzed. A cointegration test showed that the variables share a long-run trend. Variance decomposition shows that variations in the margin are explained by variations in volatility, and, finally, a Granger causality test indicates that variations in the margin precede variations in estimated volatility. These results are expected to help increase market transparency by illustrating the sophistication of the structures, and to contribute to the debate on the new regulation of structured products that the Central Bank is about to define.

Relevance:

100.00%

Publisher:

Abstract:

This paper introduces a residual-based test under which the null hypothesis of co-movement between two processes with local persistence can be tested, even in the presence of an endogenous regressor. It therefore fills an existing lacuna in econometrics: long-run relationships can now be tested even when the dependent and independent variables do not have a unit root but do exhibit local persistence.

Relevance:

100.00%

Publisher:

Abstract:

Genetic parameters were estimated for test-day milk yield and for cumulative 305-day yield (P305) in first lactations of Caracu cows. The animal model included the additive genetic effect as random, plus the fixed effects of contemporary group and of age at calving as a covariate. Two contemporary-group definitions were used to explain the variation in test-day yields, composed of year, week of test and milking unit (cg1) or year, month of test and milking unit (cg2). Variance components were estimated by restricted maximum likelihood. Phenotypic, residual and genetic variances were larger at the beginning of lactation; however, the additive genetic variance was proportionally smaller relative to the other variances. Heritability estimates ranged from 0.09 to 0.32 and were larger at the end of lactation, indicating greater genetic variability in that period. Genetic correlations (r_a) between test-day yields were positive and larger the shorter the interval between tests. The heritability of P305 was 0.27, and its genetic correlations with the test-day yields were positive and high, especially in mid-lactation. The results indicate that direct selection for P305 provides larger genetic gains for this trait than the correlated response obtained through selection on test-day yields.

Relevance:

100.00%

Publisher:

Abstract:

The objective of this study was to determine the best evaluation-period length for measures of performance, feed intake and feed efficiency. For 112 days, 60 recently weaned Nellore bulls undergoing a weight-gain test were fed in individual pens to determine feed intake and performance. Body weight was measured every 28 days, after a 16-hour fast from liquids and solids. Changes in variance, relative variance and Pearson and Spearman correlations between data from the shortened evaluation periods (28, 56 and 84 days) and the full period (112 days) were used to determine the best evaluation-period length. The evaluation period for average daily gain, dry-matter intake, feed conversion and residual feed intake can be shortened to 84, 28, 84 and 84 days, respectively, since such a reduction does not significantly decrease the reliability of evaluations of animals fed in individual pens.

Relevance:

100.00%

Publisher:

Abstract:

Objectives: Previous research conducted in the late 1980s suggested that vehicle impacts following an initial barrier collision increase severe occupant injury risk. Now over 25 years old, these data are no longer representative of currently installed barriers or the present US vehicle fleet. The purpose of this study is to provide a present-day assessment of secondary collisions and to determine whether current full-scale barrier crash testing criteria indicate secondary collision risk for real-world barrier crashes. Methods: To characterize secondary collisions, 1,363 (596,331 weighted) real-world barrier midsection impacts selected from 13 years (1997-2009) of in-depth crash data available through the National Automotive Sampling System (NASS) / Crashworthiness Data System (CDS) were analyzed. Scene diagrams and available scene photographs were used to determine roadside and barrier-specific variables unavailable in NASS/CDS. Binary logistic regression models were developed for second event occurrence and resulting driver injury. To investigate current secondary collision crash test criteria, 24 full-scale crash test reports were obtained for common non-proprietary US barriers, and the risk of secondary collisions was determined using recommended evaluation criteria from National Cooperative Highway Research Program (NCHRP) Report 350. Results: Secondary collisions were found to occur in approximately two thirds of crashes where a barrier is the first object struck. Barrier lateral stiffness, post-impact vehicle trajectory, vehicle type, and pre-impact tracking conditions were found to be statistically significant contributors to secondary event occurrence. The presence of a second event was found to increase the likelihood of a serious driver injury by a factor of 7 compared to cases with no second event. The NCHRP Report 350 exit angle criterion was found to underestimate the risk of secondary collisions in real-world barrier crashes.
Conclusions: Consistent with previous research, collisions following a barrier impact are not an infrequent event and substantially increase driver injury risk. The results suggest that using exit-angle based crash test criteria alone to assess secondary collision risk is not sufficient to predict second collision occurrence for real-world barrier crashes.