972 results for HYPOTHESIS TESTS
Abstract:
A good portfolio structure enables an investor to diversify more effectively and to understand the systematic influences on their performance. However, in the property market, the choice of structure is affected by data constraints and convenience. Using individual return data, this study tests the hypothesis that the structures commonly used in the UK do not explain a significant share of the variation in property returns. It is found that, in the periods studied, not all the structures were effective and, for the annual returns, no structure was significant in all periods. The results suggest that the drivers represented by the structures take some time to be reflected in individual property returns. They also confirm the results of other studies in finding property type a much stronger factor than region in explaining returns.
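As an illustration of this kind of test, the sketch below regresses individual property returns on categorical structure dummies and reads off the regression F-statistic, which tests the null that the structure explains nothing about returns. The column names and data are hypothetical, and the study's exact specification may differ.

```python
# Minimal sketch: does a portfolio structure (property type, region)
# explain a significant share of individual property returns?
# Data and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

returns = pd.DataFrame({
    "ret":    [0.04, 0.07, 0.02, 0.05, 0.06, 0.01, 0.03, 0.08],
    "ptype":  ["retail", "office", "industrial", "retail",
               "office", "industrial", "retail", "office"],
    "region": ["London", "North", "South", "North",
               "London", "South", "North", "London"],
})

# Regress returns on each structure's dummies; the model F-statistic
# tests the null that the structure has no explanatory power.
for structure in ("ptype", "region"):
    fit = smf.ols(f"ret ~ C({structure})", data=returns).fit()
    print(structure, "F =", round(fit.fvalue, 2), "p =", round(fit.f_pvalue, 3))
```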
Abstract:
The applicability of AI methods to Chagas' disease diagnosis is investigated using Kohonen's self-organizing feature maps. Electrodiagnosis indicators calculated from ECG records are used as the features in the input vectors that train the network. Cross-validation results are used to modify the maps, markedly improving the interpretation of the resulting output. As a result, the map might be used to reduce the need for invasive explorations in chronic Chagas' disease.
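To make the method concrete, here is a minimal self-organizing map sketch in numpy: hypothetical ECG-derived indicator vectors are mapped onto a two-dimensional grid by repeatedly pulling the best-matching unit and its neighbours toward each input. Grid size, learning schedule and data are illustrative assumptions, not the authors' configuration.

```python
# Minimal self-organizing feature map: hypothetical vectors of six
# ECG-derived electrodiagnosis indicators mapped onto an 8x8 grid.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))        # 200 records, 6 indicators (fake data)
grid = rng.normal(size=(8, 8, 6))    # 8x8 map of weight vectors

n_iter, lr0, sigma0 = 2000, 0.5, 3.0
ii, jj = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
for t in range(n_iter):
    x = X[rng.integers(len(X))]
    # Best-matching unit: the node whose weights are closest to the input.
    d = np.linalg.norm(grid - x, axis=2)
    bi, bj = np.unravel_index(d.argmin(), d.shape)
    # Decay the learning rate and neighbourhood radius over time.
    lr = lr0 * np.exp(-t / n_iter)
    sigma = sigma0 * np.exp(-t / n_iter)
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    grid += lr * h[:, :, None] * (x - grid)

# After training, each record can be labelled by its winning map node,
# and nodes inspected for association with diagnostic classes.
```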
Abstract:
The principal driver of nitrogen (N) losses from the body, including excretion and secretion in milk, is N intake. However, other covariates may also play a role in modifying the partitioning of N. This study tests the hypothesis that N partitioning in dairy cows is affected by energy and protein interactions. A database containing 470 dairy cow observations was collated from calorimetry experiments. The data include N and energy parameters of the diet and N utilization by the animal. Univariate and multivariate meta-analyses that considered both within- and between-study effects were conducted to generate prediction equations based on N intake alone or with an energy component. The univariate models showed strong positive linear relationships between N intake and N excretion in faeces, urine and milk, with slopes of 0.28 for faeces N, 0.38 for urine N and 0.20 for milk N. Multivariate model analysis did not improve the fit. Metabolizable energy intake had a significant positive effect on the amount of milk N in proportion to faeces and urine N, which is also supported by other studies. Another measure of energy considered as a covariate to N intake was diet quality, or metabolizability (the concentration of metabolizable energy relative to gross energy of the diet). Diet quality also had a positive linear relationship with the proportion of milk N relative to N excreted in faeces and urine. Metabolizability had the largest effect on faeces N, owing to the lower protein digestibility of low-quality diets. Urine N was also affected by diet quality, and the magnitude of the effect was higher than for milk N. This research shows that including a measure of diet quality as a covariate with N intake in a model of N excretion can enhance our understanding of the effects of diet composition on N losses from dairy cows. The new prediction equations developed in this study could be used to monitor N losses from dairy systems.
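A worked example of what the reported slopes imply for the marginal fate of additional N intake (intercepts are not reported here, so only the slope-based partitioning is shown):

```python
# Marginal partitioning of each extra unit of N intake, using the
# reported univariate slopes. The intake figure is hypothetical.
slopes = {"faeces N": 0.28, "urine N": 0.38, "milk N": 0.20}

extra_intake = 100.0  # extra N intake, g/day (illustrative)
for route, slope in slopes.items():
    print(f"{route}: {slope * extra_intake:.0f} g/day of the extra 100 g")
# The remaining fraction (0.14) of marginal intake is retained in the
# body or lost by routes outside these three equations.
```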
Abstract:
In this paper we examine the order of integration of EuroSterling interest rates, employing techniques that allow for a structural break under the null and/or alternative hypothesis of the unit-root tests. In light of these results, we investigate the cointegrating relationship implied by the single, linear expectations hypothesis of the term structure of interest rates, employing two techniques, one of which allows for the possibility of a break in the mean of the cointegrating relationship. The aim of the paper is to investigate whether or not the interest rate series can be viewed as I(1) processes and, furthermore, to consider whether there has been a structural break in the series. We also determine whether, if we allow for a break in the cointegration analysis, the results are consistent with those obtained when a break is not allowed for. The main results reported in this paper support the conjecture that the ‘short’ Euro-currency rates are characterised as I(1) series that exhibit a structural break on or near Black Wednesday, 16 September 1992, whereas the ‘long’ rates are I(1) series that do not support the presence of a structural break. The evidence from the cointegration analysis suggests that tests of the expectations hypothesis based on data sets that include the ERM crisis period, or a period that includes a structural break, might be problematic if the structural break is not explicitly taken into account in the testing framework.
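A hedged sketch of this style of unit-root analysis: statsmodels' Zivot-Andrews test allows one break under the alternative hypothesis and is used here as a stand-in for the paper's exact test variants, applied to simulated data with a mid-sample level shift.

```python
# Unit-root tests with and without allowance for a structural break,
# on simulated data standing in for interest-rate series.
import numpy as np
from statsmodels.tsa.stattools import adfuller, zivot_andrews

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(scale=0.1, size=400))  # random walk
y[200:] += 1.5                                  # level shift mid-sample

adf_stat, adf_p, *_ = adfuller(y)
za_stat, za_p, _, _, break_idx = zivot_andrews(y, regression="c")
print(f"ADF p = {adf_p:.3f}; Zivot-Andrews p = {za_p:.3f}, "
      f"estimated break at t = {break_idx}")
```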
Abstract:
This paper considers the effect of GARCH errors on the tests proposed by Perron (1997) for a unit root in the presence of a structural break. We assess the impact of degeneracy and integratedness of the conditional variance individually and find that, apart from in the limit, the testing procedure is insensitive to the degree of degeneracy but does exhibit increasing over-sizing as the process becomes more integrated. When we consider the GARCH specifications that we are likely to encounter in empirical research, we find that the Perron tests are reasonably robust to the presence of GARCH and do not suffer from severe over- or under-rejection of a correct null hypothesis.
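The size question can be illustrated with a small Monte Carlo: simulate a unit-root process whose innovations follow GARCH(1,1) and record how often a unit-root test rejects a true null at the nominal 5% level. The standard ADF test stands in for Perron's break test here, and all parameter values are illustrative.

```python
# Empirical size of a unit-root test under GARCH(1,1) innovations.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
omega, alpha, beta = 0.05, 0.10, 0.85  # fairly integrated: alpha + beta = 0.95

def garch_unit_root(n=300):
    """Random walk whose increments follow GARCH(1,1): a true unit root."""
    h, eps = 1.0, np.empty(n)
    for t in range(n):
        eps[t] = np.sqrt(h) * rng.normal()
        h = omega + alpha * eps[t] ** 2 + beta * h
    return np.cumsum(eps)

reps = 200
rejections = sum(adfuller(garch_unit_root())[1] < 0.05 for _ in range(reps))
print(f"empirical size at nominal 5%: {rejections / reps:.3f}")
```

Over-sizing would show up as an empirical rejection rate well above 0.05 as alpha + beta approaches one.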
Abstract:
Deception-detection is the crux of Turing's experiment to examine machine thinking, conveyed through a capacity to respond with sustained and satisfactory answers to unrestricted questions put by a human interrogator. However, in the 60 years, to the month, since the publication of Computing Machinery and Intelligence, little agreement exists on a canonical format for Turing's textual game of imitation, deception and machine intelligence. This research recovers, from the trapped mine of philosophical claims, counter-claims and rebuttals, Turing's own distinct five-minute question-answer imitation game, which he envisioned practicalised in two different ways: (a) a two-participant, interrogator-witness viva voce; (b) a three-participant comparison of a machine with a human, both questioned simultaneously by a human interrogator. Using the 18th Loebner Prize for Artificial Intelligence contest and Colby et al.'s 1972 transcript-analysis paradigm, this research practicalised Turing's imitation game with over 400 human participants and 13 machines across three original experiments. Results show that, at the current state of technology, a deception rate of 8.33% was achieved by machines in 60 human-machine simultaneous comparison tests. Results also show that more than 1 in 3 Reviewers succumbed to hidden-interlocutor misidentification after reading transcripts from experiment 2. Deception-detection is essential to uncover the increasing number of malfeasant programmes, such as CyberLover, developed to steal identities and financially defraud users in chatrooms across the Internet. Practicalising Turing's two tests can assist in understanding natural dialogue and mitigate the risk from cybercrime.
Abstract:
Gardner's popular model of perfect competition in the marketing sector is extended to a conjectural-variations oligopoly with endogenous entry. By revising Gardner's comparative statics on the "farm-retail price ratio", tests of hypotheses about food industry conduct are derived. Using data from a recent article by Wohlgenant, which employs Gardner's framework, we test the validity of his maintained hypothesis that the food industries are perfectly competitive. No evidence is found of departures from competition in the output markets of the food industries of eight commodity groups: (a) beef and veal, (b) pork, (c) poultry, (d) eggs, (e) dairy, (f) processed fruits and vegetables, (g) fresh fruit, and (h) fresh vegetables.
Abstract:
Blood clotting response (BCR) resistance tests are available for a number of anticoagulant rodenticides. However, during the development of these tests many of the test parameters have been changed, making meaningful comparisons between results difficult. It was recognised that a standard methodology was urgently required for future BCR resistance tests and, accordingly, this document presents a reappraisal of published tests and proposes a standard protocol for future use (see Appendix). The protocol can be used to provide information on the incidence and degree of resistance in a particular rodent population; to provide a simple comparison of resistance factors between active ingredients, thus giving clear information about cross-resistance for any given strain; and to provide comparisons of susceptibility or resistance between different populations. The methodology has a sound statistical basis, being founded on the ED50 response, and requires far fewer animals than the resistance tests in current use. Most importantly, the tests can give a clear indication of the likely practical impact of resistance on field efficacy. The present study was commissioned and funded by the Rodenticide Resistance Action Committee (RRAC) of CropLife International.
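For illustration, an ED50 can be estimated by fitting a logistic dose-response curve to the fraction of animals responding at each dose; the doses and response rates below are hypothetical, not data from the protocol.

```python
# Estimating an ED50 from dose-response data of the kind a blood
# clotting response test produces. All numbers are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def logistic(dose, ed50, slope):
    """Fraction responding at a given dose; equals 0.5 at dose == ed50."""
    return 1.0 / (1.0 + (ed50 / dose) ** slope)

doses     = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # mg/kg (hypothetical)
responded = np.array([0.1, 0.3, 0.5, 0.8, 0.95])  # fraction responding

(ed50, slope), _ = curve_fit(logistic, doses, responded, p0=[2.0, 1.0])
print(f"estimated ED50 = {ed50:.2f} mg/kg (slope {slope:.2f})")
```

A resistance factor can then be expressed as the ratio of the ED50 estimated for a suspect population to that of a susceptible baseline strain.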
Abstract:
A situation assessment uses reports from sensors to produce hypotheses about a situation at a level of aggregation that is of direct interest to a military commander. A low level of aggregation could mean forming tracks from reports, which is well documented in the tracking literature as track initiation and data association. This paper also discusses higher-level aggregation: assessing the membership of tracks in larger groups. Ideas used in joint tracking and identification are extended, using multi-entity Bayesian networks to model a number of static variables, of which the identity of a target is one. For higher-level aggregation a scheme for hypothesis management is required. It is shown how an offline clustering of vehicles can be reduced to an assignment problem.
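The reduction to an assignment problem can be sketched with scipy's linear_sum_assignment: given a matrix of track-to-group assignment costs (hypothetical negative log-likelihoods here), it returns the minimum-cost pairing.

```python
# Assigning tracks to candidate groups by solving an assignment problem.
# Costs are hypothetical negative log-likelihoods.
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j]: cost of assigning track i to candidate group j
cost = np.array([
    [1.2, 4.0, 6.1],
    [3.9, 0.8, 5.5],
    [5.7, 4.4, 1.1],
])

rows, cols = linear_sum_assignment(cost)
for track, group in zip(rows, cols):
    print(f"track {track} -> group {group} (cost {cost[track, group]:.1f})")
print("total cost:", cost[rows, cols].sum())
```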
Abstract:
The behaviour of stationary, non-passive plumes can be simulated in a reasonably simple and accurate way by integral models. One of the key requirements of these models, but also one of their less well-founded aspects, is the entrainment assumption, which parameterizes turbulent mixing between the plume and the environment. The entrainment assumption developed by Schatzmann and adjusted to a set of experimental results requires four constants and an ad hoc hypothesis to eliminate undesirable terms. With this assumption, Schatzmann’s model exhibits numerical instability for certain cases of plumes with small velocity excesses, due to very fast radius growth. The purpose of this paper is to present an alternative entrainment assumption based on a first-order turbulence closure, which only requires two adjustable constants and seems to solve this problem. The asymptotic behaviour of the new formulation is studied and compared to previous ones. The validation tests presented by Schatzmann are repeated and it is found that the new formulation not only eliminates numerical instability but also predicts more plausible growth rates for jets in co-flowing streams.
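As an illustration of the class of model involved (not Schatzmann's entrainment assumption or the paper's new closure, whose exact forms are not given here), the sketch below integrates a top-hat integral model of a neutrally buoyant round jet in a co-flowing stream, closed with a generic two-constant entrainment function E = a1*|u - ua| + a2*ua; all constants are illustrative.

```python
# Top-hat integral model of a round jet in a co-flowing stream with a
# generic two-constant entrainment closure. Constants are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

ua = 0.5                 # ambient co-flow velocity, m/s
a1, a2 = 0.057, 0.005    # adjustable entrainment constants (illustrative)

def rhs(s, y):
    Q, M = y                   # volume and momentum fluxes (per pi)
    u = M / Q                  # top-hat jet velocity
    R = Q / np.sqrt(M)         # jet radius
    E = a1 * abs(u - ua) + a2 * ua   # entrainment velocity
    dQ = 2.0 * R * E           # mass entrained across the jet boundary
    dM = ua * dQ               # entrained fluid carries momentum ua*dQ,
    return [dQ, dM]            # so excess momentum M - ua*Q is conserved

R0, u0 = 0.1, 2.0              # source radius (m) and velocity (m/s)
sol = solve_ivp(rhs, [0.0, 50.0], [R0**2 * u0, R0**2 * u0**2], rtol=1e-8)
Q, M = sol.y
print("far-field radius:", float(Q[-1] / np.sqrt(M[-1])), "m")
```

The numerical behaviour of such a model hinges on how the entrainment function behaves as the velocity excess u - ua tends to zero, which is where the fast radius growth described above arises.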